• brucethemoose@lemmy.world · 1 day ago

    Well, since they’re open models, that’s easy to fix. And has been.

    https://huggingface.co/perplexity-ai/r1-1776

    https://huggingface.co/microsoft/MAI-DS-R1

    Not to mention finetunes that will do unspeakable things. It’s not cost-prohibitive or hard.

    You want to decensor “Open”AI though? Tough. They don’t even offer completion endpoints anymore!

    In other words, people shouldn’t and don’t have to use official Chinese releases if there is even the slightest concern of political censorship.
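
    For anyone who wants to try this themselves, here is a minimal sketch of loading one of the linked decensored checkpoints with the Hugging Face transformers library. The prompt and generation settings are illustrative assumptions, and the linked repos are full R1-scale (hundreds of GB of weights), so in practice you’d reach for a quantized or distilled variant, or a hosted inference endpoint; the call pattern stays the same.

    ```python
    # Minimal sketch: querying a decensored R1 variant via transformers.
    # Assumes torch + accelerate are installed and there is enough VRAM/RAM;
    # the full-size checkpoints linked above are hundreds of GB, so swap in a
    # smaller or quantized variant for anything like consumer hardware.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "microsoft/MAI-DS-R1"  # or "perplexity-ai/r1-1776"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype="auto"
    )

    # Illustrative prompt -- any politically sensitive question works as a test.
    messages = [{"role": "user", "content": "What happened at Tiananmen Square in 1989?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```
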

  • wampus@lemmy.ca · 1 day ago

    America / Trump’s EOs on AI basically say they need to tune their models to be as racist as, or more racist than, Grok currently is.

    At least with China’s approach, they seem to be ‘saying’ the right thing with regard to open-sourcing it and taking a more collaborative approach internationally. The USA under Trump is just “NO DEI AT ALL!!! MAKE IT SUPPORT DEAR LEADER’S RIGHT-THINK OR NO GOVT CONTRACTS FOR YOU!!! THIS IS NOT BIAS, THIS IS US REMOVING BIAS!!!”

  • OsrsNeedsF2P@lemmy.ml · 2 days ago

    These responses closely mirrored examples of the false claim from pro-China sources, which alleged that Taiwan’s Democratic Progressive Party (DPP) was suppressing opposition voters by deliberately withholding voter notifications.

    There are two things at play here. First, all models being released these days have safety built into the training. In the West, we might focus on preventing people from harming others or hacking; in China, they’re steering people toward being politically supportive of China. But in a way, we are all “exporting” our propaganda.

    Second, as called out in the article, these responses are clearly based on the training data. That is where the misinformation starts, and you can’t “fix” the problem without first fixing that data.

    • Pycorax@sh.itjust.works · 2 days ago

      In the West, we might focus on preventing people from harming others or hacking; in China, they’re steering people toward being politically supportive of China. But in a way, we are all “exporting” our propaganda.

      I don’t think anyone can say with a straight face that these two cases are both propaganda. So-called “western propaganda” here is really just advising the user that maybe self-harm, etc. is not such a good idea. It’s not telling the user outright false, unverifiable claims.