Great. Now we just have to get Signal off AWS and we’ll be good.
Signal puts a lot of effort into a threat model that assumes a hostile host (i.e. AWS). That’s the whole point of end-to-end encryption: even if the host is compromised, the attackers don’t get any information. They even go as far as padding out the lengths of encrypted messages so everyone looks like they’re sending identical blocks of data.
I’m assuming they were referring more to the outage that occurred today, which pulled a ton of internet services, including Signal, offline temporarily.
You can have all the encryption in the world, but if the centralized point that lets you access the service is down, then you’re fucked.
no matter where you host, outages are going to happen… AWS really doesn’t have many… it’s just that it’s so big that everyone notices - it causes internet-wide issues
Monero, Nostr, Lemmy, and Mastodon did not go down. Why? Because they are decentralized
Monero isn’t like the other three, it’s P2P with no single points of failure.
I haven’t looked too closely at Nostr, but I’m assuming it’s typically federated, with relays acting like Lemmy/Mastodon instances in terms of data storage (it’s a protocol, so I suppose posts could be stored locally and switching relays is easy). If your instance goes down, you’re just as screwed as you would be with a centralized service, because individual Lemmy and Mastodon instances are effectively centralized services that happen to share data. If your instance doesn’t go down but a major one does, your experience will be significantly degraded.
The only way to really solve this problem is with P2P services, like Monero, or to have sufficient diversity in your infrastructure that a single major failure doesn’t kill the service. P2P is easy for something like a currency, but much more difficult for social media where you expect some amount of moderation, and redundancy is expensive and also complex.
Nostr is a weird being. You are correct that it is not peer-to-peer like Monero is. However, it’s not quite federated in the same way that ActivityPub is.
When using Nostr clients, you actually publish the same data to something like six different relays at once. The protocol has the built-in assumption that some of those relays will be down at any given time, so by publishing to several at once you get data redundancy.
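To make that concrete, here’s a minimal sketch of the fan-out pattern. The relay URLs and the event are placeholders, the event is assumed to be signed elsewhere, and this isn’t taken from any particular client:

```python
# Minimal sketch: publish the same signed Nostr event to several relays so
# that the loss of any one relay doesn't lose the post. Relay URLs are
# placeholders; the event is assumed to be already signed.
import asyncio
import json

import websockets  # pip install websockets

RELAYS = [
    "wss://relay.example-one.com",
    "wss://relay.example-two.com",
    "wss://relay.example-three.com",
]


async def publish(relay_url: str, signed_event: dict) -> None:
    # NIP-01: clients send ["EVENT", <event>] over a websocket connection.
    try:
        async with websockets.connect(relay_url) as ws:
            await ws.send(json.dumps(["EVENT", signed_event]))
            print(relay_url, await ws.recv())  # relay replies, e.g. ["OK", ...]
    except Exception as exc:
        # A dead relay just gets skipped; the other copies still exist.
        print(relay_url, "unreachable:", exc)


async def publish_everywhere(signed_event: dict) -> None:
    # Fan the same event out to every relay concurrently.
    await asyncio.gather(*(publish(url, signed_event) for url in RELAYS))

# asyncio.run(publish_everywhere(my_signed_event))
```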
Come on, mate… Lemmy as a whole didn’t go down, but instances of Lemmy absolutely did go down. As they regularly do, because shit happens.
that’s pretty disingenuous though… individual lemmy instances go down or have issues regularly… they’re different, but not necessarily worse in the case of stability… for robustness of the system as a whole there’s perhaps an argument in favour of distributed, but “the system as a whole” isn’t a particularly helpful argument when you’re trying to access your specific account
centralised services are just inherently more stable for the same type of workload because they tend to be less complex, with less network interconnectedness to cause issues, and you can focus a lot more energy on building out automation and recovery rather than spending energy repeatedly building the same things… that energy is distributed, but it’s still human effort: centralised systems are likely to be more stable because they’ve had significantly more work put into stability, detection, and recovery
Right, but even if individual instances go down, you don’t end up with headlines all over the world about half the internet being down. Half the internet isn’t down, because the network is self-healing: it temporarily blocks off the problem area, and when the instance comes back, it resynchronizes and continues as normal.
Services might be temporarily degraded, but not gone entirely.
but that’s a compromise… it’s not categorically better
you can’t run a bank like you run distributed instances, for example
services have different uptime requirements… this is perhaps the first time i’ve ever heard of signal having downtime, and the second time ever that i can remember there’s been a global AWS incident like this
and not only that, but lemmy and every service you listed aren’t even close to the scale of their centralised counterparts. we just aren’t there yet with the knowledge of how to build these services at that scale, so you can’t simply say that centralised services are always worse, less reliable, etc. twitter is the usual example of this. it seems really easy, and arguably you could build a microblogging service in about 30 minutes, but scaling it to the size twitter handles is incredibly difficult and involves a lot of computer science (not just software engineering)
That was my point. But as somebody else pointed out here, the degree of security we currently enjoy as Signal users starts to get eroded away.
Padding isn’t anything special. Most practical uses of block ciphers require it.
Nitpicking here, but from the previous words in your comment I assume you mean blocks of data of identical length.
Although it should really be “multiples of an identical size”, I suppose.
Anyway, sorry for nitpicking.
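For what it’s worth, the “multiples of an identical size” idea can be sketched as simple length-bucket padding. This is my own illustration with an assumed bucket size, not Signal’s actual scheme:

```python
# Rough illustration of length-bucket padding (the bucket size is an
# assumption, not Signal's real parameters): every plaintext is padded up to
# the next bucket boundary before encryption, so ciphertext lengths fall into
# a small set of fixed sizes instead of leaking the exact message length.
BUCKET = 160  # bytes


def pad_to_bucket(plaintext: bytes) -> bytes:
    # Append 0x80 then zero bytes up to the boundary (ISO 7816-4 style),
    # so the padding can be removed unambiguously after decryption.
    padded_len = ((len(plaintext) // BUCKET) + 1) * BUCKET
    return plaintext + b"\x80" + b"\x00" * (padded_len - len(plaintext) - 1)


def unpad(padded: bytes) -> bytes:
    return padded[: padded.rindex(b"\x80")]


# Two very different messages end up in the same 160-byte bucket:
assert len(pad_to_bucket(b"hi")) == len(pad_to_bucket(b"a" * 150)) == 160
assert unpad(pad_to_bucket(b"hi")) == b"hi"
```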
or federated server
Would be very cool to be able to host a Signal homeserver.
https://signal.org/blog/the-ecosystem-is-moving/ here is Moxie’s take on that (Signal’s former CEO).
So I don’t think it’s happening.
they won’t do that.
Matrix tried for quite a while to get interoperability, but signal is just too paranoid about distributed hosting or interoperability of their software/protocol. it’s quite annoying
And yet simplex exists.
https://simplex.chat/
Wait, simplex isn’t paid?
No, it’s totally free and open source, and you can host it on your own server if you wish.
Just use Matrix…
I did; it’s a buggy, undercooked mess that doesn’t work half the time. The app that’s officially supported is missing half the features. Trying to get people to switch to it is like pulling teeth, as the onboarding process is overly complicated for the average user.
Meanwhile Signal works right out of the box with very little fuss.
I could. Presumably so could the others commenting on this post. But then what are we to do about the privacy- or tech-illiterate people we’ve carried to Signal over the years?
It’s easy to whinge about just doing what you perceive as the optimal solution. It’s more difficult when you need to navigate the path to get there from where we are now.
No
I guess the research doesn’t have to be limited to Signal. The more other apps can benefit from it, the more resilient “private communications over the internet” gets.
So that’s why Signal didn’t send my messages very quickly today then, maybe.
It’s not completely rolled out yet. That was likely AWS being down.
Also, the new quantum-protected message encryption headers are about 2 kB. If that’s causing issues with your internet, you may want to look at getting new internet.
The average person sends/receives about 35 messages a day. That’s 70 kB per person per day.
Signal has approx. 100 million users.
Which means this adds about 7 terabytes daily.
Just doing the math on it, there’s no point to this message 😁
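Spelled out, with all of the inputs above treated as rough assumptions rather than official figures:

```python
# Back-of-the-envelope version of the numbers above (all three inputs are
# rough assumptions, not official Signal figures).
header_kb = 2              # ~2 kB of post-quantum header per message
msgs_per_user_per_day = 35
users = 100_000_000

total_kb_per_day = header_kb * msgs_per_user_per_day * users
print(total_kb_per_day / 1_000_000_000, "TB/day")  # -> 7.0 TB/day
```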
2 kB? While it may not sound like much, that’s a couple of packets’ worth of data (depending on MTU). If you think about it in terms of how TCP sends packets and needs ACKs, there’s actually a lot of round-trip processing going on for just that one part.
TCP will generally send up to 10 packets immediately without waiting for the ACKs (depending on the configured window size).
Generally, any message or website under about 14 kB will be transmitted in a single round trip, assuming no packets are dropped.
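Rough numbers behind that ~14 kB figure, assuming common defaults rather than anything measured:

```python
# Where the ~14 kB-in-one-round-trip figure comes from, assuming common
# defaults: an initial congestion window of 10 segments (the Linux default
# since 2.6.39 / RFC 6928) and a ~1460-byte MSS on a 1500-byte MTU link.
initcwnd_segments = 10
mss_bytes = 1460  # 1500-byte MTU minus 40 bytes of IPv4 + TCP headers

print(initcwnd_segments * mss_bytes)  # 14600 bytes, i.e. ~14 kB per RTT
```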
Sorry, yeah, that’s the only thing I was referring to.
My internet connection is 500/500 Mbps, and I can’t change it. 😄👍
Should have been pretty obvious to anyone reading any tech news whatsoever today, especially in the context of where you responded. No apology from you should have been necessary!
You would think 😅 The sorry was slightly sarcastic, but shhh, nobody need know.