Robustness and Open Source

While it may be true that open source projects eventually reach a more "stable" and "robust" status, I would point out that in recent years the open source paradigm has increasingly become one of releasing one faulty version after another until it finally works.

After all, not having commercial pressure behind a project reduces the attention paid to the quality of the product. Yes, open source projects will end up fixing more bugs than a commercial product, but what about all those users who had to work with buggy software for a long time and had no one to complain to? Let's not always quote Apache as the example of open source software and Office as its counterpart. Apache is no doubt an exceptional effort, but are we sure all open source projects will end up like that?

And that is not all. The incredible plethora of alpha, beta, stable, unstable, branched, and other versions we are used to seeing in open source projects is virtually hidden from view in commercial products. A commercial product tends to model itself on the needs of its customers: it sacrifices unneeded features (and unneeded versions) and improves those in great demand (specialization, in other words). Open source often lacks this ability to focus on mainstream demand and ends up implementing countless things that will only be needed by a few users.

This is to point out that there is also a different design goal behind the development, not simply a different way of doing it. Hybrid approaches, like the Darwin core combined with commercial layers, could help the open source community by directing its "open" efforts toward a "commercial" goal.

That said, it is true that many commercial developers these days tend to release buggy software just to meet imposed marketing deadlines, and thus cannot stand comparison with their open source counterparts.
