The Security of Open Source vs Closed Source Software

When I first released SimpleRisk as a free tool back in March of 2013, I decided to license it under the open source Mozilla Public License 2.0.  There were a number of reasons why I did this.  The primary reason was that I wanted to give potential users confidence in knowing that they could use that open source code without having to worry about us coming back at them with licensing demands.  Each company that downloads our SimpleRisk Core is free not only to use the product, but also to build onto it for their own use.  While we continue to develop and license new plug-and-play "Extras" that work alongside that core functionality, we intend for the SimpleRisk Core to always be free and open source.

The other main reason we licensed the SimpleRisk Core under the MPL 2.0 is that our PHP source code is already freely accessible anyway.  While there are ways to compile PHP code, much as is done with software written in Java or C, we've always felt that giving customers our source code provides a level of transparency that you don't get with many enterprise applications.  Curious how some functionality works in SimpleRisk?  Just look at the code for it.  We've even included a bunch of comments in the code to help you understand why we do what we do.  This makes interacting with our more technical customers far simpler, and we've received a number of bug fixes and feature contributions to the product as a result.

This leads me to the topic of this blog post: security.  There are many who would argue that closed source is invariably more secure by nature, since attackers can't review the code for bugs.  The problem with this argument is that it cuts both ways.  While it is true that the bad guys don't have access to the source code, it also means that the good guys don't have access to it either.  With the exception of some advanced decompiling techniques, the application itself becomes a black box.  You can feed data in and get data out, but the internal workings may as well be magic, and you're left guessing at how everything works together.  As I mentioned above, with open source there is no guessing.  It's all right there in the open, which means that whole communities spring up around analyzing code for security vulnerabilities and even providing guidance on how to fix them.  We run a bug bounty program through HackerOne, enabling us to have far more insight into our application security than we ever would trying to do it all ourselves.  In addition, as a customer you have the ability to run your own static and dynamic analysis tools against our code.  I had an old manager who used to use the phrase "Trust, but verify" all the time.  You may trust the code from Microsoft or Cisco, as an example, but can you verify that it is secure?
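To give a feel for what that kind of verification can look like, here is a minimal sketch of pointing a static analysis tool at a PHP codebase such as the SimpleRisk Core.  The choice of tool (PHPStan), the working directory, and the strictness level are illustrative assumptions on my part, not a prescribed workflow; any static or dynamic analyzer you already trust will do.

    # Illustrative only: assumes you already have a copy of the SimpleRisk Core
    # source on disk and that Composer is installed.
    cd simplerisk

    # Install PHPStan as a development dependency.
    composer require --dev phpstan/phpstan

    # Analyse the code; the path and level shown here are just examples.
    vendor/bin/phpstan analyse . --level=1

Dynamic analysis works the same way in spirit: stand up a test instance of the application and point your preferred web application scanner at it, then compare what you find against what the vendor tells you.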

So, which is more secure, open source or closed source?  The answer is both...or neither...depending on how you look at it.  It comes down to the organization writing the code and the effort it puts into its security development lifecycle.  But, as the most recent Microsoft Patch Tuesday vulnerabilities show, even companies spending millions of dollars on security will still have bugs.  The real question is how quickly an organization can find those issues and how quickly it is able to address them.