It’s amazing what a difference a little bit of time can make: Two years after kicking off what looked to be a long-shot campaign to push back on the practice of shutting down server-dependent videogames once they’re no longer profitable, Stop Killing Games founder Ross Scott and organizer Moritz Katzner appeared in front of the European Parliament to present their case—and it seemed to go very well.
Digital Fairness Act: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14622-Digital-Fairness-Act/F33096034_en



Not only games. Goes for all electronics as well.
Sick of supporting your ‘old phones’? You’re required by law to disclose all binary blobs as source code to let somebody else pick up the slack.
Feeling like bricking old Kindles? Fine, but users must be able to install an alternative OS on the old device.
Not providing software updates for your TV anymore after you removed features? That’s your right, but so is the right of the effing device owner to install something else on it.
And it’s not just consumer electronics (cough John Deere cough).
Not to be pro-corporate/anti-repair…but I feel I have to play devil’s advocate here…
That sounds like a legal and security nightmare.
If you just give binary blobs and no sources, there’s no way to maintain the code/device long term. As exploits continue to be found in upstream dependencies, the hardware continues to become increasingly insecure.
But if the source needs to be released…I imagine there are heaps of proprietary code still in use on “active” devices even after another model goes EoL…so if that code is released, there are instantly thousands of nefarious eyes on it.
On top of the regular zero-days that surface when a popular product reaches EoL.
I think that’s potentially a lot to ask of users. Will your technically-challenged great-aunt switch to a post-support build when her phone hits EoL, or will hackers be able to remote-control her banking app and take away your inheritance before the community can even patch it (assuming there’s enough community support out there for an 8-year-old Galaxy A-series…)?
Then there could also be licensed code that would need to be released as well…hence the legal nightmare.
Not saying it’s impossible…in fact, I largely agree with your stance and stated position. Just saying there are some blockers on this epic.
Security is constantly used as a guise for removing consumer rights and as someone who has been in the security industry for about 9 years I’m so sick of it.
First and foremost, everyone please understand: the user should be allowed to opt into your concept of insecurity: you do not know their threat model and you do not know their risk tolerance.
Using exploits in low-level drivers in the wild is approaching APT level, and even if there were a simple one to use, it’d likely be useless without some sort of local access to the device (bar some horror-show bug in a Bluetooth or WiFi firmware). The risk is incredibly low for the average person. I’d put it pretty close to 0.
Wire transfers aren’t instant and for large sums (your inheritance) the banks will likely require more than just a request from your app. If the bank cares about that then they can also use the attestation APIs which would be more than sufficient, as much as I hate them.
This boogey man of the APT going after my technologically illiterate <family member> with nation state level exploits needs to die. Long ago we entered a new era of security where it just isn’t worth it to waste exploits. Especially when you can just text people and ask for their money and that works plenty well.
Security is not a valid reason to soft brick consumer devices at some arbitrary end of life date.
Agreed, but I think a framing or two is missing here. One, which only applies to a subset of devices, is that the rest of the world shouldn’t have to deal with more/larger botnets because these things haven’t been considered.
Another is just that the average great-aunt isn’t opting into a concept of insecurity; they’re simply ignorant of what threats there are. If it’s possible to distinguish between the two sets of people, or maybe even to bucket devices by potential threat, it might go a long way. I probably have a lot wrong here, I just woke up.
But yeah, agreed that security is an argument that gets hidden behind.
Yes I’m not going to take some “survival of the fittest” nonsense approach to security: consumers need securely built devices and software. This is the first line of defense always: we need to make things secure and then have secure defaults according to whatever we decide “secure” means in the context of our widget or software. Then we need to provide “advanced” (or even just “ignorant but risk tolerant”) users with the ability to change the device or software to match their definition of “secure”.
The easiest example is secure boot. Your laptop likely has a key provided by your OEM and likely Microsoft’s key preinstalled. This is a valid “secure boot” path for the average user, provided your OEM and Microsoft don’t get compromised, which is APT territory. However, you are provided with the ability to use a different key if you know how to do that. You have thus opted into protecting your own private key, but now you have more control over your device. This design is notably absent in phones, which is absolutely bananas and actually less secure under some threat models.
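For anyone curious what the “use your own key” path looks like in practice: on a typical Linux laptop it goes roughly like this, sketched with the `sbctl` key manager (an assumption on my part — other tools like `efi-updatevar` work too, the firmware must be put in setup mode first, and the kernel path varies by distro):

```shell
# Check current Secure Boot state; firmware should report setup mode
sbctl status

# Generate your own Platform Key, KEK, and signature database keys
sbctl create-keys

# Enroll them into the firmware; --microsoft also keeps Microsoft's
# keys so option ROMs and dual-boot Windows still verify
sbctl enroll-keys --microsoft

# Sign your kernel with your key; -s remembers the file so future
# updates get re-signed automatically
sbctl sign -s /boot/vmlinuz-linux

# Confirm everything the firmware will load carries a valid signature
sbctl verify
```

After that, the machine only boots what you (or Microsoft, if you kept their keys) have signed — exactly the opt-in tradeoff described above: you now carry the burden of protecting your private key, but you control the trust root.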
You could extend examples like this if you wanted. One could easily imagine a device that does soft brick itself after the EOL date to simply protect people that are ignorant of the potential risks, but also provides an advanced user with the ability to revive it in a “less secure” state. The less advanced user will then have to either learn something new or buy a new device.
That is not the problem of a corporation that has walked away from the rights to its product. That is my problem as an informed user, deciding that I know well enough what I’m doing.
Security can’t be the constant reason for EoLs. Especially when there’s no real reason beyond the company needing the next cash cow.
This isn’t for the average user. My grandma isn’t gonna learn how to flash a custom firmware on her old phone. But an informed user can.
Right now, if your device has no more support, you can use it until something else changes and it becomes incompatible. Then you have a dead box that doesn’t do anything anymore, simply because the company decided to no longer support it.
It’s about having the OPTION to use it in the future so the community can at least try to fix it.
Security by obscurity is a myth
No. It’s a valid tactic but needs to be part of a much broader strategy.
Absolute security is unachievable, but it is much harder to probe a black box to understand how it works than reading its entire manual.
Technically, I’d say it’s a stalling tactic, but yeah, by no means is it a sound, comprehensive strategy.
That implies any and all FOSS projects should be getting exploited constantly, especially those run by a community of hobbyists, and that is simply not the case.
They are exploited constantly. And fixed constantly.
There’s been a notable uptick in supply chain attacks coming from the odd FOSS dependency.
Fortunately the FOSS environment as a whole, ironically, reflects the best aspects of a “free market” in the capitalist sense. If a package is no longer maintained, or poorly maintained, or the maintainer is a douche/Russian asset, it forks and many users jump ship to the newer package.
Users have full transparency into how the sausage is made. Everybody does.
So if exploitable code is discovered, it can just as well be discovered first by a defensive researcher (non-inclusive term: white-hat) or offensive researcher (black-hat).
And if an offensive researcher discovers it first, they have a choice: disclose it responsibly, or try to exploit it quietly before anyone patches it.
Submitting bad code to a project is a hurdle in itself, though. Some new user with no reputation is going to be heavily scrutinized putting a PR on a large/popular project. And even with a good reputation, you’re still putting the exploit code out in the open and hoping none of the reviewers or maintainers catch it.