New design sets a high standard for post-quantum readiness.

  • lemmee_in@lemmy.world · +94 · 1 day ago

    Signal puts a lot of effort into a threat model that assumes a hostile host (e.g. AWS). That’s the whole point of end-to-end encryption: even if the host is compromised, the attackers do not get any information. They even go as far as padding out the lengths of encrypted messages so everyone looks like they are sending identical blocks of data.
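
    A minimal sketch of that padding idea, assuming a simple pad-to-bucket scheme (the bucket size and marker byte here are illustrative, not Signal’s actual implementation): pad every plaintext up to the next multiple of a fixed bucket size before encrypting, so ciphertext lengths reveal only a coarse size bucket rather than the exact message length.

    ```python
    BUCKET = 160  # illustrative bucket size, not necessarily Signal's

    def pad(plaintext: bytes) -> bytes:
        """Append a 0x80 marker, then zeros up to the next bucket boundary."""
        target = ((len(plaintext) // BUCKET) + 1) * BUCKET
        return plaintext + b"\x80" + b"\x00" * (target - len(plaintext) - 1)

    def unpad(padded: bytes) -> bytes:
        """Drop everything from the final 0x80 marker onward."""
        return padded[: padded.rindex(b"\x80")]

    msg = b"hi"
    padded = pad(msg)
    assert len(padded) % BUCKET == 0  # an observer sees only the bucket
    assert unpad(padded) == msg
    ```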

    • shortwavesurfer@lemmy.zip · +41 −1 · 23 hours ago

      I’m assuming they were referring more to the outage that occurred today, which temporarily pulled a ton of internet services, including Signal, offline.

      You can have all the encryption in the world, but if the centralized point that allows you to access the service is down, then you’re fucked.

      • Pup Biru@aussie.zone · +19 −1 · 17 hours ago

        no matter where you host, outages are going to happen… AWS really doesn’t have many… it’s just that it’s so big that when it does, everyone notices: it causes internet-wide issues

          • sugar_in_your_tea@sh.itjust.works · +1 · 5 hours ago

            Monero isn’t like the other three; it’s P2P, with no single point of failure.

            I haven’t looked too closely at Nostr, but I’m assuming it’s typically federated, with relays acting like Lemmy/Mastodon instances in terms of data storage (it’s a protocol, so I suppose posts could be stored locally, and switching relays is easy). If your instance goes down, you’re just as screwed as you would be with a centralized service, because each Lemmy or Mastodon instance is effectively a centralized service that shares data with the others. If your instance doesn’t go down but a major one does, your experience will be significantly degraded.

            The only way to really solve this problem is with P2P services, like Monero, or to have sufficient diversity in your infrastructure that a single major failure doesn’t kill the service. P2P is easy for something like a currency, but much more difficult for social media, where you expect some amount of moderation. Redundancy, meanwhile, is expensive and complex.

            • shortwavesurfer@lemmy.zip · +1 · 3 hours ago

              Nostr is a weird being. You are correct that it is not peer-to-peer like Monero is. However, it’s not quite federated in the same way that ActivityPub is.

              When using Nostr clients, you actually publish the same data to something like six different relays at the same time. The protocol has the built-in assumption that some of those relays will be down at any given moment, so publishing to several at once gives you data redundancy.
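
              A hedged sketch of that fan-out, using the NIP-01 wire format over raw WebSockets (the relay URLs are examples, the event is assumed to be signed already, and a real client would use a proper library with more careful error handling):

              ```python
              import asyncio, json
              import websockets  # pip install websockets

              RELAYS = [  # example relays; real clients often use six or more
                  "wss://relay.damus.io",
                  "wss://nos.lol",
                  "wss://relay.snort.social",
              ]

              async def publish(relay_url: str, signed_event: dict) -> bool:
                  """Send one pre-signed event to one relay; True if accepted."""
                  try:
                      async with websockets.connect(relay_url, open_timeout=5) as ws:
                          await ws.send(json.dumps(["EVENT", signed_event]))
                          # Relay replies ["OK", <event id>, <accepted?>, <message>]
                          reply = json.loads(await asyncio.wait_for(ws.recv(), 5))
                          return reply[0] == "OK" and reply[2]
                  except Exception:
                      return False  # a down relay is expected; the rest are redundancy

              async def fan_out(signed_event: dict) -> None:
                  results = await asyncio.gather(*(publish(r, signed_event) for r in RELAYS))
                  print(f"accepted by {sum(results)}/{len(RELAYS)} relays")
              ```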

          • Alaknár@sopuli.xyz · +8 · 10 hours ago

            Come on, mate… Lemmy as a whole didn’t go down, but instances of Lemmy absolutely did go down. As they regularly do, because shit happens.

          • Pup Biru@aussie.zone · +15 · edited · 12 hours ago

            that’s pretty disingenuous though… individual lemmy instances go down or have issues regularly… the models are different, but centralised isn’t necessarily worse in the case of stability… for robustness of the system as a whole there’s perhaps an argument in favour of distributed, but “the system as a whole” isn’t a particularly helpful measure when you’re trying to access your specific account

            centralised services are just inherently more stable for the same type of workload because they tend to be less complex, have less networking interconnectedness to cause issues, and can focus a lot more energy on building out automation and recovery rather than repeatedly building the same things… in a distributed system that energy is spread out, but again it’s still human effort: centralised systems are likely to be more stable because they’ve had significantly more work put into stability, detection, and recovery

            • shortwavesurfer@lemmy.zip · +1 · 10 hours ago

              Right, but even if individual instances go down, you don’t end up with headlines all over the world about half the internet being down, because half the internet isn’t down. The network is self-healing: it temporarily blocks off the problem area, and then when the instance comes back, it resynchronizes and continues as normal.

              Services might be temporarily degraded, but not gone entirely.
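
              In the abstract, that retry behaviour could look something like this (a hypothetical sketch, not Lemmy’s actual federation code): the sending instance keeps each activity queued and retries with exponential backoff until the target comes back.

              ```python
              import time

              def deliver_with_backoff(send, activity, max_attempts=8, base_delay=1.0):
                  """Try to deliver one queued activity; back off exponentially on failure."""
                  for attempt in range(max_attempts):
                      if send(activity):  # send() is assumed to return True on success
                          return True
                      time.sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ... between tries
                  return False  # still down: leave it queued and try again later
              ```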

              • Pup Biru@aussie.zone · +2 · 9 hours ago

                but that’s a compromise… it’s not categorically better

                you can’t run a bank like you run distributed instances, for example

                services have different uptime requirements… this is perhaps the first time i’ve ever heard of signal having downtime, and the second time ever that i can remember there’s been a global AWS incident like this

                and not only that, but lemmy and every service you listed aren’t even close to the scale of their centralised counterparts. we just aren’t there yet with the knowledge of how to build these services at that scale, so we can’t simply say that centralised services are always worse, less reliable, etc. twitter is the usual example of this: it seems really easy, and arguably you can build a microblogging service in about 30 minutes, but scaling it to the size twitter handles is incredibly difficult and involves a lot of computer science (not just software engineering)

      • heysoundude@eviltoast.org · +1 · 18 hours ago

        That was my point. But as somebody else pointed out here, the degree of security we currently enjoy as Signal users starts to get eroded away.

    • Victor@lemmy.world · +8 · 1 day ago

      > sending identical blocks of data

      Nitpicking here, but I assume from the previous words in your comment that you mean blocks of data of identical length.

      Although it should be as if we are all sending multiples of an identical block size, I suppose.

      Anyway, sorry for nitpicking.