• Treczoks@lemmy.world · +1 · 3 minutes ago

    If Fakebook or ex-Twatter suddenly have to remove all hate-, shit-, and Nazi posts, they would probably be rather … empty?

  • Rose@slrpnk.net · +9 · 7 hours ago

    On one hand, I think safe-harbour laws are very much necessary if we want the Internet to work for the positive good of the world. We want the companies to take reasonable precautions and act on problematic stuff if it crops up, but that’s probably enough.

    But on the other hand, jeez, have you seen what kind of discussion shitholes Facebook and Twitter have cultivated? If your company is being described as an accessory to genocide, maybe something has already gone horribly wrong.

  • MolecularCactus1324@lemmy.world · +6 · edited · 9 hours ago

    I think liberal values like free speech, secularism, and tolerance might actually require defending from forces that abuse those values to destroy them. If this is the way to do it, then I think it’s becoming increasingly necessary. The fact of the matter is that a lot of people are impressionable, shaped by what they read and see. Society simply cannot function when there are malicious actors intentionally trying to spread divisive hate and misinformation. Make Facebook liable for hosting the kind of shit that led to January 6th.

  • gaspar_petersen@programming.dev · +26/-1 · 14 hours ago

    I’m not sure this is a good thing. How will small Brazilian websites and forums be able to comply with these regulations? Sure, Meta and Google can afford to spend millions on content moderation. I don’t know if all sites can. I wonder how it will affect Brazilian lemmy instances, for example.

    • Dr. Moose@lemmy.world · +10 · 9 hours ago

      They won’t be able to. Tech laws in Brazil are incredibly archaic and nonsensical, so this isn’t even registering with people because Brazil is so far behind.

    • FreedomAdvocate · +12 · 13 hours ago

      Exactly. Imagine if Lemmy instance owners became legally responsible for everything posted on their servers! Especially with the way federation works - instance admins would have to de-federate from every other instance on day 1, and would basically have to approve every single comment on every single post to avoid getting in trouble with the law.

      This is a terrible idea. It means a single bad actor could easily bring down a small social media site single-handedly, just by spamming illegal content and then reporting it to the police themselves.

      • The_v@lemmy.world · +5 · 10 hours ago

        The court’s decision also introduced the concept of systemic failure, which holds providers liable when they fail to adopt preventive measures or remove illegal content. Now, platforms will be expected to establish self-regulation policies, ensure transparency in their procedures, and adopt standardized practices.

        Pretty sure this would cover Lemmy and most traditional forums as long as they have a written policy and standards that are consistently enforced.

  • ShittyBeatlesFCPres@lemmy.world · +31/-1 · 19 hours ago

    On the one hand, I’m against censorship. But on the other, every bit of content on Facebook and X should be removed and all their hardware run through industrial shredders. It’s quite the conundrum.

    • Peruvian_Skies@sh.itjust.works · +20/-1 · 18 hours ago

      Censorship is bad, but Facebook and X’s entire business models revolve around spreading content that is at once false and inflammatory, either just to create engagement or for more malicious purposes, and they reach a huge portion of the population directly, including children, teenagers, the mentally ill and other vulnerable populations. This requires a new understanding of accountability for spreading information.

      I wouldn’t agree that it makes sense to hold a Mastodon instance responsible for what its users post, because they don’t have a financial incentive or the ability to promote misinformation at a massive scale. Twitter does. As Aristotle said, we must treat equals equally, and treat the unequal unequally according to the form and extent of their inequality.

      • Balder@lemmy.world · +7/-1 · 16 hours ago

        Algorithms optimized for engagement with no ethics were the point where the world started going downhill.

    • ViatorOmnium@piefed.social · +2/-1 · edited · 11 hours ago

      The same way your right to wave your hands ends before they reach other people’s faces, free speech can’t include speech infringing on other people’s dignity (in the legal/philosophical sense).

      Regulating speech within this frame is as bad as stopping a bar fight by dragging the instigator away.

      • Ulrich@feddit.org · +2/-1 · 9 hours ago

        free speech can’t include speech infringing on other people’s dignity

        Actually, that’s exactly what free speech is for. Nobody needs free speech to tell their neighbor their hair looks nice today.

        • ViatorOmnium@piefed.social · +2/-1 · edited · 9 hours ago

          You are thinking about dignity and imagining a British aristocrat drinking tea from a fine porcelain cup with the pinky held up while plotting world domination. That’s the vernacular meaning of dignity.

          In the context of human rights, dignity is the natural right of every single person to be valued and respected for their own sake, and to be treated ethically.

          You don’t infringe it by saying that your government is being managed by incompetent or immoral people. You infringe it when you say women belong in the kitchen and not in the office, or that black people are naturally inferior to white people, or that gay people don’t have the right to love who they love, etc.

    • FlashMobOfOne@lemmy.world · +6/-4 · 17 hours ago

      It’s not censorship to hold people accountable for making editorial decisions on media platforms, and as long as FB, Twitter, and others are weighting different kinds of content in their algorithms (which they are), they should be held accountable financially and legally for the consequences.

  • I Cast Fist@programming.dev · +8 · 18 hours ago

    The main problem is that the platforms have a financial incentive to keep spam and scam posts online. Scammers pay Meta and TikTok for boosts, the scam gets boosted, all done.

    Frankly, it seems like the problem could be solved by forcing the platforms to get a Know Your Customer level of information and putting that info on every boosted post, so people know who’s paying for that.

    Buuuut, it’s Brazil. Justice and fairness only ever happen as side effects of judges’ decisions.
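The KYC-labelling proposal above could be sketched as a simple data model: a verified advertiser record that must exist before a boost is accepted, with the disclosure rendered on every boosted post. This is a minimal illustrative sketch, not any platform's actual system; all names and the sample tax ID are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class AdvertiserKYC:
    """Hypothetical verified-identity record a platform could require
    before accepting payment for a boost."""
    legal_name: str
    tax_id: str      # e.g. a CNPJ for a Brazilian company (made-up here)
    verified: bool


@dataclass
class BoostedPost:
    post_id: str
    content: str
    paid_by: AdvertiserKYC  # disclosure attached to every boosted post

    def disclosure_label(self) -> str:
        # Refuse to boost for unverified advertisers; otherwise build
        # the label users would see on the post.
        if not self.paid_by.verified:
            raise ValueError("unverified advertisers cannot boost posts")
        return (f"Paid for by {self.paid_by.legal_name} "
                f"(tax ID {self.paid_by.tax_id})")


advertiser = AdvertiserKYC("Example Ltda.", "12.345.678/0001-90", verified=True)
post = BoostedPost("abc123", "Buy our product!", advertiser)
print(post.disclosure_label())
```

The point of the sketch is that the disclosure is a hard precondition of the boost transaction, not an after-the-fact moderation step.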

    • Dr. Moose@lemmy.world · +2 · 9 hours ago

      It’s so easy to implement half-decent KYC these days. There’s a bit of KYC already, but it’s so basic that any scammer can get around it easily.

      Meta in particular is so bad. I’ve been reporting straight-up scam ads on Threads for months now and they’re still there!

      • I Cast Fist@programming.dev · +1 · edited · 1 hour ago

        Last week, as this discussion was getting media attention, Meta said that “most of those posts only stay up for 4-8 hours before the (uploader) deletes it, before we’ve had time to review”

        Dunno how truthful that statement is, but it really shows how easy it is to game their system.

        • Dr. Moose@lemmy.world · +1 · edited · 19 minutes ago

          Yes, but they should have manual approval of every ad. I don’t think that’s too much to ask, but somehow they’ve convinced people otherwise.

  • FreedomAdvocate · +7/-3 · 18 hours ago

    Guess the people of Brazil will be losing access to social media platforms soon.

    • meyotch@slrpnk.net · +12/-1 · 18 hours ago

      Lucky people. They will have a period of mourning and then enter a cultural renaissance.

      • FreedomAdvocate · +4/-1 · 13 hours ago

        Nah, most likely the government that enacts these laws will be replaced soon enough by whoever promises to undo the laws.