Posted on 07/02/2003 4:41:01 PM PDT by for-q-clinton
Open source code as flawed as proprietary: Study
By Stephen Shankland, CNET News.com
Wednesday, July 2, 2003, 9:54 AM
The source code for a newer version of the Apache Web server software is of the same quality as that of proprietary competitors at a similar stage of development, a new study has found.
The review compared version 2.1 of the Apache Web server software, which is used to host Web sites, with several commercial packages that handle the same chores. Reasoning, a company whose business is analyzing code quality, compared the recently released version with code of competitors at a similar stage of development.
The study found 0.53 defects per thousand lines of code for Apache, compared with 0.51 for the commercial software, on average.
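Defect density here is simply defects divided by thousands of lines of code. A minimal sketch of the arithmetic, using hypothetical raw counts chosen only to reproduce the published rates (the article reports the densities, not the underlying counts):

```python
def defects_per_kloc(defects: int, lines: int) -> float:
    """Defect density: defects per thousand lines of code (KLOC)."""
    return defects / (lines / 1000.0)

# Hypothetical counts for illustration; only the resulting densities
# match the figures quoted in the study.
apache = defects_per_kloc(31, 58000)        # ~0.53 defects/KLOC
commercial = defects_per_kloc(51, 100000)   # 0.51 defects/KLOC
print(f"Apache: {apache:.2f}  commercial average: {commercial:.2f}")
```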
The comparable defect rate indicates that open-source software starts out as raw as proprietary software, but Reasoning said that ultimately open-source software has the potential to exceed proprietary software in quality. That's significant given the increasingly widespread use of open-source software such as Linux, the OpenOffice desktop suite and the MySQL database.
"The open-source code seems to start at the same defect rate for early commercial code as well," Jeff Klagenberg, director of project management, said in an interview. "Over time, it can gain higher levels of quality. That appears to be because of the natural inspection process inherent in open source."
The earlier study praised Linux for the quality of the component that handles the TCP/IP networking that underlies the Internet and many home and corporate networks. That code had a defect rate of 0.1 per 1,000 lines of code and was a more mature section of code.
Reasoning next is studying Tomcat, an Apache module that lets Web servers run Java programs, said Tom Fry, Apache's director of marketing. The company plans to release that study in about two weeks, he said.
Either way, I find it interesting that the TCP/IP stack is so mature in Linux... think it might be because they have some stolen IP sitting in there? You know, the article could have mentioned that too, since it went from objective to subjective.
Really, so you're telling me my Grandma can edit the code if she hits a bug?
Yes, your Grandma can edit her copy of the source (since she has her own copy), and play with it on her own system to her heart's content. Then, if her changes produce an actual improvement, and she can demonstrate that convincingly to others, they can import the same changes. If the changes prove themselves over time, then maybe Linus will incorporate them into the base release.
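That workflow usually boils down to sharing a patch: edit your local copy, then generate a diff that others can review and apply. A toy sketch of that step using Python's standard `difflib` (the file name and typo are invented for illustration):

```python
import difflib

# Grandma's local copy of a toy source file, before and after her fix.
# The "nam" typo is a made-up bug for demonstration purposes.
original = ["def greet(name):\n", "    print('Hello ' + nam)\n"]
fixed    = ["def greet(name):\n", "    print('Hello ' + name)\n"]

# A unified diff is the form she would send upstream for review.
patch = difflib.unified_diff(original, fixed,
                             fromfile="greet.py.orig", tofile="greet.py")
print("".join(patch))
```

If the maintainers like what they see, they apply the same diff to the shared tree; nothing about her local tinkering affects anyone else until then.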
Before dismissing it out of hand, why not do a little research on the issue? If I were an Open Source proponent, I'd want to really understand this before I dismissed it. My guess is that they look at all the fixes that have come out within some set time period after the product was released. And by this study Open Source lost, but I'm sure if you applied statistics to the numbers it would show it was really a tie, so I'm not going to quibble that point.
Hey now I see why developers love Linux. They can charge fees to people who will never be able to challenge them. I now see the boondoggle in this Linux scam. Sign me up, I want to rip off people too.
Almost all OS'es (including Windows) derived their original implementation of TCP/IP from code lifted out of BSD. The same is true for many "dedicated" devices like routers. The BSD license explicitly permits this.
Go back to your job with SCO.
Actually I agree with you for the most part on this. But the article praises Linux for doing nothing more than copying someone else's code. So it may not be stolen, but Linux still runs that risk, and they do have a good point that Linux has matured extremely quickly since IBM entered the fray. It matured quickly compared to other OSS and closed-source software. Other than subjective reasoning, what facts can you give me that would show me otherwise?
Care to provide any proof of that claim? Or did you just make that up?
| objective evidence that Open Source is just as buggy as Closed
Two equally-accurate headlines here are "Open Source Code as Good as Proprietary" and "Little Quality Difference Found between Open, Closed Source Software." But we didn't get any of those. We got "Open Source Can't Swim." We see this stuff every day on FR. Usually it's "Bush Can't Swim" or "GOP Pollutes River With Old Shoes." In politics it's liberal bias. It's reporters trying to help the Democrats. The only time I've ever seen this kind of thing in a trade rag is when the Publisher (read: Ad Salesman-in-Chief) is leaning on the Editor to "give our friends the benefit of the doubt."
The interesting part is their choice of versions. In February 2003, Netcraft reported that Apache 2.x was about 1% (yes, one) of Apache deployments. 99% are running Apache 1.x.
Although new features are going into Apache 2.x, Apache 1.x continues to be maintained and is apparently adequate for most servers.
That's not what they said, and neither did I -- I used the word derived. I was simply discrediting your subtle hint that it might be "stolen".
In actuality, the BSD code in question has been found to have several significant bugs in it over the years, and it has continued to evolve. While some of the OSes that used it took a while to ship fixes, the open-source kernels typically had a patch available within days. Network code was never my specialty, so I don't know if any of it still bears any resemblance to the original.
So it may not be stolen, but Linux still runs that risk and they do have a good point that Linux has matured extremely quickly since IBM entered the fray. It matured quickly compared to other OSS and Closed source software.
IBM has been a contributor, but they are hardly the "reason" that Linux has matured quickly. There are literally thousands of contributors, and the number continues to grow. What has really happened is that they reached "critical mass" a few years ago and became a real contender instead of an interesting exercise in worldwide cooperative effort.
Other than subjective reasoning, what facts can you give me that would show me otherwise?
If you want facts, this article doesn't have them. I've used automated code analyzers in the past and found they typically have a very high false-positive rate. I used one on another Unix derivative (OSF/Mach), spent a great deal of time cleaning up code and making it easier to maintain and understand, but found only one or two "real" bugs out of about a thousand "detected."
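To put the poster's recollection in perspective, a quick back-of-envelope on what "one or two real bugs out of about a thousand detected" implies for the false-positive rate (the counts are the poster's rough memory, not study data):

```python
# Rough counts as reported by the poster, not measured figures.
flagged = 1000       # issues "detected" by the automated analyzer
real_bugs = 2        # issues that turned out to be genuine defects

false_positives = flagged - real_bugs
fp_rate = false_positives / flagged
print(f"False-positive rate: {fp_rate:.1%}")   # 99.8%
```

At that rate, nearly every analyzer warning demands human triage, which is exactly why raw "defects per KLOC" figures from such tools deserve some skepticism.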
The only facts that would convince me either way would be a side-by-side test: similar applications, similar workloads, similar hardware. Over time, one would get a better idea about reliability. Some of my clients are taking exactly this approach, but it will be a while before the trend becomes clear.