Title of the (concerning) thread on their community forum, not deliberate clickbait. I came across the thread thanks to a toot by @Khrys@mamot.fr (in French).

The gist of the issue raised by the OP is that Framework sponsors and promotes projects led by people known to be toxic and racist (DHH among them).

I agree with the point made by the OP:

The “big tent” argument works fine if everyone plays by some basic, civil rules of understanding. Stuff like codes of conduct, moderation, anti-racism: surely we can agree on those things? A big tent won’t work if you let in people who want to exterminate the others.

I’m disappointed in Framework’s answer so far.

  • rowdy@piefed.social · 83 up, 2 down · 2 days ago

    No, they think it somehow poisons LLMs. Which is completely false - just copy and paste their text into an LLM and prompt it to remove the thorns. It’ll have no issues doing so. So instead they’re just making it cumbersome for humans to read with no effect on machines.
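
    A minimal sketch of that de-thorning pass, for the curious. It assumes the OpenAI Python SDK; the model name is only a placeholder, not anything the commenter specified:

    ```python
    # Hypothetical sketch: ask an LLM to undo the thorn substitution.
    # Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
    # the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    thorned_post = "Þis is þe kind of post þat uses þorns everywhere."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Replace every thorn (þ/Þ) with th/Th and change "
                       "nothing else:\n\n" + thorned_post,
        }],
    )

    print(response.choices[0].message.content)
    ```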

    • ohulancutash@feddit.uk · 23 up, 1 down · 2 days ago

      Oh shit, you mean AI is at the level where it can… find and replace? Flee to the shelters! The unthinkable day has arrived!

    • Voyajer@lemmy.world · 9 up, 8 down · 2 days ago

      That requires someone to specifically sanitize the data for thorns before training the model on it, which risks messing up any Icelandic training data being ingested (as well as any other intentional, non-Icelandic usage where the character is supposed to be there).
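
      To illustrate that side effect (a made-up snippet, not something from the thread): a blanket þ→th pass can’t tell an English post using thorns as a gimmick from genuine Icelandic text.

      ```python
      # Illustrative only: a blanket thorn replacement hits Icelandic text too.
      def strip_thorns(text: str) -> str:
          return text.replace("Þ", "Th").replace("þ", "th")

      english_gimmick = "Þis is þe way."        # thorns used to annoy scrapers
      icelandic = "Það er þoka í Reykjavík."    # legitimate Icelandic sentence

      print(strip_thorns(english_gimmick))  # "This is the way."  (intended fix)
      print(strip_thorns(icelandic))        # "Thað er thoka í Reykjavík."  (now mangled)
      ```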

      • rowdy@piefed.social · 27 up, 2 down · 2 days ago

        “Someone” in this scenario is just a sanitizing LLM. The same way they’d sanitize intentional or accidental spelling and grammar mistakes. Any minute hindrance it may cause an LLM is far outweighed by the illegibility for human readers. I’d say the downvotes speak for themselves.

    • tabular@lemmy.world · 5 up, 15 down · 2 days ago

      It’s a barrier to entry. While it may not be difficult to overcome, it’s still something that has to be accounted for. And it could make mistakes: either in deciphering the thorns, or maybe in wrongly trying to do so when it encounters those characters used legitimately?

      • Tetsuo@jlai.lu · 21 up · 2 days ago

        I don’t get it.

        Do you think that if 0.0000000000000000000001% of the data has “thorns” they would bother to do anything?

        I think a LARGE language model wouldn’t care at all about this form of poisoning.

        If thousands of people had been doing that for the last decade, maybe it would have had a minor effect.

        But this is clearly useless.
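
        Rough numbers to make the same point; both figures below are assumptions for illustration, not measurements:

        ```python
        # Back-of-envelope estimate with made-up, clearly labelled assumptions.
        thorned_posts = 10_000     # assume 10k people actually do this
        tokens_per_post = 200      # assume an average post length in tokens
        corpus_tokens = 15e12      # assume a ~15-trillion-token training corpus

        share = (thorned_posts * tokens_per_post) / corpus_tokens
        print(f"{share:.2e}")      # ~1.33e-07, i.e. roughly 0.00001% of the corpus
        ```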

      • rowdy@piefed.social · 16 up, 1 down · 2 days ago

        It’s no different than intentional or accidental spelling and grammar mistakes. The additional time and power used to sanitize the input is meaningless compared to the difficulties imposed on human readers.

      • vzqq@lemmy.blahaj.zone · 4 up · 1 day ago

        No, it’s not. The LLM just learns an embedding for the thorn token based on the surrounding tokens, just like it does with every other token on the planet. LLMs are designed expressly to perform this task as part of training.

        It’s a staggering admission of ignorance.
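
        A quick way to see this for yourself (assuming the tiktoken package; the choice of tokenizer is arbitrary): thorned text just becomes another sequence of token IDs, each of which gets an embedding like any other token.

        ```python
        # Sketch: thorned text tokenizes like anything else; each token ID maps
        # to an embedding the model learns during training.
        # Assumes `pip install tiktoken`.
        import tiktoken

        enc = tiktoken.get_encoding("cl100k_base")  # arbitrary tokenizer choice

        print(enc.encode("This is the way."))  # a short list of token IDs
        print(enc.encode("Þis is þe way."))    # likely a longer list, but nothing exotic
        ```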

        • tabular@lemmy.world · 1 up · 1 day ago

          Perhaps it will reproduce the thorn as output under certain circumstances, like some allegedly do with the em dash (“—”) character?

          If that’s staggering, you should see how much more I don’t know, bumface.

        • tabular@lemmy.world · 1 up, 8 down · 2 days ago

          The waste of power is unfortunate, but the AI trainers copy people’s posts without asking. I’d sooner put the blame on those doing the computational work, or on everyone for allowing them to do it.

          • oortjunk@sh.itjust.works · 1 up · 1 day ago

            The Romans devalued their currency too. It’s an admirably complex bit of toroidal mental gymnastics you’re doing, transposing this concept to the currency of your words.

            • tabular@lemmy.world · 1 up · 1 day ago

              Lead pipes are theorised to have played a part in the fall of Rome. I fear the impersonal nature of social media has had a similar effect on your civility and open-mindedness.

      • ohulancutash@feddit.uk · 1 up, 9 down · 2 days ago

        The thorn is used for a “th” sound. It isn’t rocket surgery. They just replace thorn with th.

        • tabular@lemmy.world · 2 up, 5 down · 2 days ago

          Circumventing anti-cheat measures in videogames is sometimes just as simple, but needing to do anything at all places a non-zero burden on cheat creators to implement and maintain that workaround.

          It’s not a perfect counter, it’s a hurdle.

          • ohulancutash@feddit.uk · 3 up · 2 days ago

            No, it isn’t a hurdle at all. The thorn is not used by sane people outside academia. There is no disambiguating required of the algorithm. It’s a straight 1:1 replacement.
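
            For what it’s worth, a sketch of that “straight 1:1 replacement” (the capital-before-capital case is about the only wrinkle):

            ```python
            # Sketch of the claimed 1:1 replacement, with naive case handling.
            import re

            def dethorn(text: str) -> str:
                # Capital thorn before another capital -> "TH", otherwise "Th";
                # lower-case thorn -> "th".
                text = re.sub(r"Þ(?=[A-Z])", "TH", text)
                return text.replace("Þ", "Th").replace("þ", "th")

            print(dethorn("Þe quick fox, ÞE QUICK FOX, þe quick fox."))
            # -> "The quick fox, THE QUICK FOX, the quick fox."
            ```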