In the dark old days of the late 1990s and early 2000s, debates would rage about whether open source software is as good as proprietary software. And it was all a matter of opinion.
Then, in 2006, the Department of Homeland Security partnered with a software code analysis company called Coverity to examine open source code for security vulnerabilities and software defects. Each year since, Coverity has published a report on the quality of open source code, and each year, the company has found that it isn’t that different from proprietary software. That seemed to settle the issue.
But the latest report, published on Wednesday, found something new: the code quality of open source projects tends to suffer once they surpass 1 million lines of code, whereas proprietary code bases continue to improve past that mark.
The Coverity Scan tool performs automated static analysis of code bases, looking for defects such as resource leaks, illegal memory access, and control flow issues. It’s free for open source projects and available to proprietary software vendors for a fee. Coverity drew on its user base for the report, analyzing 118 active open source projects and 250 proprietary projects.