Industry analysts have been producing studies and fancy charts for decades. There is no doubt that some of them are quite influential. But have you ever wondered how the results of these studies are produced? Do the results actually reflect reality? How are the positions of individual products in the charts determined? Are the methodologies based on subjective assessments that are easy to influence? Or are there objective data behind them?
These questions are not easy to answer. The methodologies of industry analysts seem to be something like trade secrets. They are not public. They are not open to broad review and scrutiny. Therefore there is no way to check a methodology by looking “inside” and analyzing the algorithm. So let’s have a look from the “outside” instead: let’s compare the results of proprietary analyst studies with a similar study that is completely open.
But it is tricky to make a completely open study of commercial products. Some product licenses explicitly prohibit evaluation. Other products are almost impossible to examine in depth. Therefore we have decided to analyze open source products instead. These are completely open, and there are no obstacles to evaluating them thoroughly. Open source has been mainstream for many years, and numerous open source products are market leaders, so this provides a reasonably representative sample.
As our domain of expertise is Identity Management (IDM), we have conducted a study of IDM products. Here are the results of the IDM product feature comparison in a fancy chart:
We have taken great care to make a very detailed analysis of each product, and we have high confidence in the data. The study is completely open, so anyone can repeat it and check the results. But the data are still based on feature assessments done by several human beings. Even though we have tried hard to be as objective as possible, this can still be slightly biased and inaccurate …
Let’s take it one level higher and base the second part of the study on an automated analysis of the project source code. These are open source products; all the dirty secrets of the software vendors are there in the code for anyone to see. Therefore we have analyzed the structure of the source code and also the development history of each product. These data are not based on glossy marketing brochures. These are hard data taken from the actual code of the actual system that customers are going to deploy. We have compiled the results into a familiar graphical form:
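To give a flavor of what “automated analysis of development history” can mean, here is a minimal sketch. It assumes commit records have already been exported in an `author|year` format (for example via `git log --pretty=format:'%an|%ad' --date=format:'%Y'`); the sample log, the metric choices, and the `history_metrics` helper are illustrative assumptions, not our actual methodology.

```python
# Hedged sketch: simple development-history metrics from git log output.
# The sample log below is made up for illustration only; in a real
# analysis it would be produced per repository by:
#   git log --pretty=format:'%an|%ad' --date=format:'%Y'
from collections import Counter

sample_log = """\
alice|2016
bob|2016
alice|2015
carol|2015
alice|2014
"""

def history_metrics(log_text):
    """Count commits per author and per year from 'author|year' lines."""
    authors = Counter()
    years = Counter()
    for line in log_text.strip().splitlines():
        author, year = line.split("|")
        authors[author] += 1
        years[year] += 1
    return authors, years

authors, years = history_metrics(sample_log)
print("contributors:", len(authors))           # 3
print("most active:", authors.most_common(1))  # [('alice', 3)]
print("commits in 2016:", years["2016"])       # 2
```

Metrics like contributor count and commit activity per year are objective in the sense that anyone can re-run the same script against the same repository and get the same numbers.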
Now, please take the latest study of your favorite industry analyst and compare the results. What do you see? I leave the conclusion of this post to the reader. However, I cannot resist the temptation to comment that the results are pretty obvious.
But what can be done about this? Is our study correct? We believe that it is, and you can check that yourself. Or have we made mistakes, and is the truth closer to what the analysts say? We simply do not know, because the analysts keep their methodologies secret. Therefore I have a challenge for all the analysts: open up your methodologies. Publish your algorithms, your data and a detailed explanation of your assessments, exactly as we did. Be transparent. Only then can we see who is right and who is wrong.