The Trump administration recently published “America’s AI Action Plan”. One of the first policy actions in the document is to eliminate references to misinformation; diversity, equity, and inclusion; and climate change from NIST’s AI Risk Management Framework.

Without any apparent sense of irony, the very next point states that LLM developers should ensure their systems are “objective and free from top-down ideological bias”.

Par for the course for Trump and his cronies, but the world should know what kind of AI the US wants to build.

  • Tony Bark@pawb.social · 1 day ago

    Fair enough. That being said, deepfake services don’t need to be open source. Anything presented to the masses is obviously going to face enforcement, but that doesn’t necessarily translate back to the open source supply chain.

    • brianpeiris@lemmy.caOP · 1 day ago

      I’m not sure if this has happened yet, but in theory the TAKE IT DOWN Act could be used to shut down an open source deepfake code or model repository. In that case you’re right that copies would spring up, but I think it’s significant that popular projects could be taken down like that.

      • Tony Bark@pawb.social · 1 day ago

        Glad we got that out of the way. Now can we get back to Trump’s EO so I can stop feeling like I’m playing devil’s advocate here?

        • brianpeiris@lemmy.caOP · 1 day ago

          Sure. My understanding of your original comment was that Trump’s EO won’t matter to open source AI because regulations, or the lack of them, won’t affect open source AI. My point is that regulations do affect open source AI in significant ways.