I support free and open source software (FOSS) like VLC, qBittorrent, LibreOffice, GIMP…

But why do people say that it’s as secure as, or more secure than, closed source software? From what I understand, closed source software doesn’t disclose its code.

If you want to see the source code of Photoshop, you actually need to work for Adobe. Otherwise, you need to be some kind of freaking reverse-engineering expert.

But open source projects have their code available to the entire world on GitHub or GitLab.

Isn’t that actually also helping hackers?

  • lucullus@discuss.tchncs.de · +4 · 1 hour ago

    Otherwise, you need to be some kind of freaking reverse-engineering expert.

    Nah, often software is stupidly easy to breach. Often it’s an openly accessible database (as happened recently with the Tea app), or you can pull other people’s data from a web app just by incrementing or decrementing the ID in your request (that commonly happened with quite a number of the digital contact-tracing platforms used during Covid).
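    To make the ID-walking pattern above concrete, here is a minimal sketch (all names and data are hypothetical) of an insecure direct object reference, and the ownership check that fixes it:

```python
# Hypothetical record store keyed by sequential IDs, as in the
# contact-tracing example above.
RECORDS = {
    1: {"owner": "alice", "data": "alice's test result"},
    2: {"owner": "bob", "data": "bob's test result"},
}

def get_record_insecure(record_id):
    # Vulnerable: whoever asks gets the record, so walking IDs
    # 1, 2, 3, ... dumps the entire table.
    return RECORDS.get(record_id)

def get_record_secure(record_id, requesting_user):
    # Fixed: an ownership check ties each record to the authenticated user.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != requesting_user:
        return None
    return record

# Enumerating IDs leaks every record from the insecure version...
leaked = [get_record_insecure(i) for i in (1, 2)]
# ...but only alice's own record from the fixed one.
allowed = [get_record_secure(i, "alice") for i in (1, 2)]
```

    Note that hiding the source changes nothing here: the attacker never needs the code, only a browser and an incrementing counter.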

    Very often, closed source just obscures screaming security issues.

    And yeah, there aren’t enough people to thoroughly audit all the open source code. But there are more people doing that than you think. Another thing to keep in mind is that reporting a security problem with a piece of software or a service can get you in serious legal trouble depending on your jurisdiction, justified or not. Corporations won’t hesitate to SLAPP-suit you out of existence if they can hide the problems that way. With open source software you typically don’t have any problems like this, since collaboration and transparency are baked into it.

  • steeznson@lemmy.world · +2 · 2 hours ago

    There isn’t a clear divide between open source software and proprietary software anymore, given how complex modern applications are. Proprietary software is typically built on top of open source libraries: Python’s Django web framework, OpenSSL, xz-utils, etc. Basically, nothing is entirely safe; even if you wrote it yourself, you could still introduce bugs, and your dependencies expose you to supply-chain attacks.

  • fmstrat@lemmy.nowsci.com · +8/−1 · 4 hours ago

    Others have mentioned this, but to make sure all context is clear:

    • FOSS software is not inherently more secure.
    • New FOSS software is probably as secure as any closed source software, because it likely doesn’t have many eyes on it and hasn’t been audited.
    • Mature FOSS software will likely have more CVEs reported against it than a closed source alternative, because there are more eyes on it.
    • Because of bullet 3, mature FOSS software is typically more secure than closed source, as security holes are found and patched publicly.
    • This does not mean a particular closed source tool is insecure, it means the community can’t prove it is secure.
    • I like proof, so I choose FOSS.
    • Most people agree, which is why most major server software is FOSS (or source available).
    • However, that’s also partly because of the permissive licensing.

  • Captain Aggravated@sh.itjust.works · +42 · 15 hours ago

    You live in some Detroit-like hellscape where everyone everywhere 24/7 wants to kill and eat you and your family. You go shopping for a deadbolt for your front door, and encounter two locksmiths:

    Locksmith #1 says “I have invented my own kind of lock. I haven’t told anyone how it works, the lock picking community doesn’t know shit about this lock. It is a carefully guarded secret, only I am allowed to know the secret recipe of how this lock works.”

    Locksmith #2 says “Okay, so the best lock we’ve got was designed in the 1980s; the design is well known, the blueprints are publicly available, the locksport and various bad-guy communities have had these locks for decades, and the few attacks they made work were fixed by the manufacturer so they don’t work anymore. Nobody has demonstrated a successful attack on the current revision of this lock in the last 16 years.”

    Which lock are you going to buy?

  • CrazyLikeGollum@lemmy.world · +14/−1 · 16 hours ago

    It’s not “assumed to be secure.” The source code being publicly available means you (or anyone else) can audit that code for vulnerabilities. The publicly available issue tracking and change tracking means you can look through bug reports and see if anyone else has found vulnerabilities and you can, through the change history and the bug report history, see how the devs responded to issues in the past, how they fixed it, and whether or not they take security seriously.

    Open source software is not assumed to be more secure, but its security (or lack thereof) is much easier to verify. You don’t have to take the dev’s word for whether it is secure, and (especially for popular projects like the ones you listed) you have thousands of people with different backgrounds and varying specialties within programming, with no affiliation with the project and no reason to trust it, doing independent audits of the code.

  • Lemvi@lemmy.sdf.org · +154 · 1 day ago

    The code being public helps with spotting issues or backdoors.

    In practice, “security by obscurity” doesn’t really work. A system’s security should hinge on the quality of the code itself, not on limiting the number of people who know how it works.
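    As a toy illustration of that principle (a deliberately bad, hypothetical home-grown cipher, not any real product): a scheme whose only protection is a secret key byte falls to an attacker who simply guesses the design and tries every key.

```python
# A home-grown single-byte XOR "cipher" that relies purely on secrecy.
SECRET_KEY = 0x5A  # the "obscure" part: the only thing protecting the data

def encrypt(plaintext):
    return bytes(b ^ SECRET_KEY for b in plaintext)

def brute_force(ciphertext, crib):
    # An attacker who merely guesses the scheme tries all 256 possible
    # keys and keeps any output containing a likely word (a "crib").
    for key in range(256):
        candidate = bytes(b ^ key for b in ciphertext)
        if crib in candidate:
            return candidate
    return None

ct = encrypt(b"attack at dawn")
recovered = brute_force(ct, b"attack")  # the "secret" falls instantly
```

    A sound design (e.g. a vetted cipher with a proper key) stays secure even when the attacker knows every line of the code, which is exactly the property open source demands.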

    • WhatAmLemmy@lemmy.world · +73 · 1 day ago

      It also provides some assurance that the service/project/company is doing what they say they are, instead of “trust us”.

      Meta has deployed code so criminal that everyone who knew about it should be serving hard jail time (if we didn’t live in corporate dictatorships). If their code were public they couldn’t pull shit like this anywhere near as easily.

    • unexposedhazard@discuss.tchncs.de · +41 · 1 day ago

      Yuup. “Security by obscurity” relies on the attacker not understanding how the software works. Problem is, hackers usually do know how software works, so that barrier is almost non-existent.

    • bamboo@lemmy.blahaj.zone · +13 · 23 hours ago

      The code being public helps with spotting issues or backdoors.

      A recent example of this is the lengths the TALOS group had to go to in order to reverse engineer Dell ControlVault, which affects hundreds of models of Dell laptops. Their blog post walks through all of the steps they had to take, and they note that, fortunately, there was some Linux support with publicly available shared objects containing debug symbols, which helped them reverse engineer the ecosystem. Dell has all of this source code and could have identified these issues far more easily themselves, but didn’t, and shipped an insecure product, leaving customers vulnerable.

  • DeathByBigSad@sh.itjust.works · +14 · 17 hours ago

    Because “some nerd out there probably would have found any exploits in the X years it’s been released” is the general assumption about open source software.

    • bestboyfriendintheworld@sh.itjust.works · +7/−9 · 18 hours ago

      You theoretically can see the code. In practice you don’t actually look at it, nor do you have the knowledge to understand the security implications of all the software you use.

      In practice it makes little difference for security if you use open or closed source software.

      • Grenfur@pawb.social · +16 · 17 hours ago

        No, you literally can see the code; that’s why it’s open source. YOU may not look at it, but people do. Random people, complete strangers, unpaid and with no vested interest in the project. The alternative is a company that pays people to say “Yeah, it’s totally safe.” That conflict of interest is problematic. Also, depending on what it’s written in, yes, I do sometimes take the time. Perhaps not for every single thing I run, but any time I come across a niche project, I read first. To claim that someone can’t understand it is wild; that’s a stranger on the internet, and your knowledge of their expertise is zero.

        In practice, 1,000 random people on the internet with no reason to “trust you, bro”, all able to audit every change you make to your code, are far more trustworthy than a handful of people paid by the company they represent. What’s worse is that if Microsoft were to have a breach, maybe 10 people on the planet would know about it. 10 people with jobs, mortgages, and families tied to that knowledge. They won’t say shit, because they can’t lose that paycheck. Compare that to, say, the XZ backdoor, where the source is available and the problem gets announced, so people know exactly who, what, and where, and can resolve the issue.

  • Scott@sh.itjust.works · +22 · 21 hours ago

    With open source code you get more eyes on it. Issues get fixed quicker.

    With closed source, such as Photoshop, only Adobe can see the code. Maybe there are issues there that could be fixed. Most large companies have a financial interest in having “good enough” security.

  • philpo@feddit.org · +18 · 20 hours ago

    One thing people tend to overlook is: Development costs money. Fixing bugs and exploits costs money.

    In a closed source application, no one will see that your software still relies on arcane concepts that weren’t even state-of-the-art when they were written 25 years ago. The bug that could easily be used as an exploit? Sure, the developer responsible for it informed his manager around 50 times that he needs time and someone from the database team to fix it. And got turned down 50 times, because it costs time and “we have to keep deadlines! And no one has noticed this bug so far, so why would anyone notice now?”

    • bestboyfriendintheworld@sh.itjust.works · +3 · 18 hours ago

      Lots of open source software uses arcane concepts too, because lots of it is old. See Xorg as a prime example; that was outdated 20 years ago already.

      Closed source software gets exploited and hacked all the time, and those companies take security seriously as well.

      Look at OpenSSL, Heartbleed, and similar high-profile security failures for how even using high-profile open source software is not automatically more secure.

      • philpo@feddit.org · +5 · 17 hours ago

        You didn’t get my point: with open source, people know. People know that Xorg uses arcane concepts, and as a client you can pay someone to go through the code. Or a governmental institution can. (And yes, mine does, with public reports.)

        This is not the case with closed source. You will only know when someone has exploited it. And while closed source applications like Windows, Office, etc. carry enough public weight that a lot of people with good intentions see them as a “challenge” and test for exploits, that is already not the case for smaller, but often critical, applications. And no, most commercial closed source applications don’t give a fuck about security, even in critical infrastructure. I worked as a PM for these applications in the past and my company now consults for critical infrastructure. The state of security in niche applications is abhorrent. The longest-running major exploit I stumbled upon was 22 years old. And it left around 65% of all water treatment plants of a smaller nation at risk. (It’s fixed now. Not because they wanted to, but because someone forced them to.)

  • assembly@lemmy.world · +29 · 23 hours ago

    One thing to keep in mind is that NO CODE is believed to be secure, regardless of open source or closed source. The difference is that a lot of folks can audit open source, whereas with closed source we all have to take the word of private companies that are constantly reducing headcount and replacing devs with AI.

  • Ephera@lemmy.ml · +18 · 21 hours ago

    Somewhat of a different take from what I’ve seen from the other comments. In my opinion, the main reason is this:
    An XKCD comic showing engineers in other fields proud of the reliability of their products, and then software engineers freaking out at the concept of computerized voting, because they absolutely do not trust their own field.

    Companies have basically two reasons to do safety/security: Brand image and legal regulations.
    And they have a reason to not do safety/security: Cost pressure.

    Now imagine a field where there are hardly any regulations and you don’t really stand out when you do security badly. The cost pressure then means you just won’t do much security.

    That’s the software engineering field.

    Now compare that to open-source. I’d argue a solid chunk of its good reputation is from hobby projects, where people have no cost pressure and can therefore take all the time to do security justice.
    In particular, you need to remember that most security vulnerabilities are just regular bugs that happen to be exploitable. I have significantly fewer bugs in my hobby projects than in the commercial projects I work on, because there’s no pressure to meet deadlines.
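    As a sketch of that last point (hypothetical names, deliberately simplified): an ordinary missing lower-bound check is just a bug, until an attacker notices that a negative index reads data they were never meant to see.

```python
# Hypothetical lookup table where public entries happen to live next to
# a sensitive value.
PUBLIC_PAGES = ["home", "about", "contact"]
ADMIN_TOKEN = "s3cr3t"
TABLE = PUBLIC_PAGES + [ADMIN_TOKEN]

def lookup_buggy(index):
    # Ordinary bug: only the upper bound is checked, so index = -1 slips
    # through and Python's negative indexing reads from the end of TABLE.
    if index >= len(PUBLIC_PAGES):
        raise IndexError("out of range")
    return TABLE[index]

def lookup_fixed(index):
    # The "security patch" is a one-line bug fix: validate both bounds.
    if not 0 <= index < len(PUBLIC_PAGES):
        raise IndexError("out of range")
    return TABLE[index]

leak = lookup_buggy(-1)  # an attacker passing -1 reads the secret
```

    Nothing about the buggy version looks like a “security hole” in isolation; it is exactly the kind of defect that deadline pressure lets through and that careful review catches.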

    And frankly, brand image applies even to open source. I will write shitty code if you pay me to. But if my name is published along with it, you need to pay me significantly more. So even if it’s a commercial project that happens to be published under an open-source license, I will not accept as many compromises to meet deadlines.

  • TabbsTheBat@pawb.social · +43 · 1 day ago

    It’s because anyone can find and report vulnerabilities, while a closed source vendor could have an issue behind closed doors and never mention that data is at risk, even if they knew.

  • Canaconda@lemmy.ca · +29/−1 · edited · 1 day ago

    Zero day exploits, aka vulnerabilities that aren’t publicly known, offer hackers the ability to essentially rob people blind.

    Open source code means you have the entire globe of developers collaborating to detect and repair those vulnerabilities. So while it’s not inherently more secure, it is in practice.

    Exploiting four zero-day flaws in the systems,[8] Stuxnet functions by targeting machines using the Microsoft Windows operating system and networks, then seeking out Siemens Step7 software. Stuxnet reportedly compromised Iranian PLCs, collecting information on industrial systems and causing the fast-spinning centrifuges to tear themselves apart.[3] Stuxnet’s design and architecture are not domain-specific and it could be tailored as a platform for attacking modern SCADA and PLC systems (e.g., in factory assembly lines or power plants), most of which are in Europe, Japan and the United States.[9] Stuxnet reportedly destroyed almost one-fifth of Iran’s nuclear centrifuges.[10] Targeting industrial control systems, the worm infected over 200,000 computers and caused 1,000 machines to physically degrade.

    Stuxnet has three modules: a worm that executes all routines related to the main payload of the attack, a link file that automatically executes the propagated copies of the worm and a rootkit component responsible for hiding all malicious files and processes to prevent detection of Stuxnet.

    Wikipedia - Stuxnet Worm

    • CompactFlax@discuss.tchncs.de · +9 · 24 hours ago

      “Open source code means you have the entire globe of developers collaborating to detect and repair those vulnerabilities.”

      Heartbleed has entered the chat

    • Frezik@lemmy.blahaj.zone · +4 · 23 hours ago

      The whole Stuxnet story is fascinating. A virus designed to spread to the whole Internet, and then activate inside a specific Iranian facility. Convinced me that we already live in a cyberpunk world.