We have heard a lot about the fantastic built-in analysis capabilities of other Enterprise Architecture and IT portfolio management products. One of these systems ships with 1,500 standard reports out of the box. If that is not enough, you can even analyse all available data with a multidimensional data cube. This sounds amazing, because it suggests that whatever I need, I will find a report for it. That must be a very well-designed product; they must have put a lot of effort into the details.
BUT! Do I really need so many details? Who has entered this massive amount of detail? Is all that information still valid? And how do I find the right report out of the 1,500?
Let's answer these questions one by one, starting with the last. Please keep in mind that our statements are made from a management perspective, not from an Enterprise Architecture expert's point of view.
How do I find the right report out of the 1,500?
If you know what kind of information you need, it should be possible to narrow the list down in a first step, let's assume to 25 reports. What now? To be honest, we don't really know. We would ask an expert to prepare a suitable PowerPoint presentation supporting our request. From a user perspective, it seems more important to offer a limited number of reports that ensure you will always find the information you need for your management decisions.
Do I really need so many details?
This answer is easy.
In 99.9% of the cases where you need data from the system, you need only a small fraction of the available data, and even that is usually aggregated before it is used. We cannot remember any management situation where a great amount of detail was necessary to make the right decision. That being said, a focus on management reports and a manageable, meaningful amount of data would be more beneficial for the user.
Who has entered this massive amount of detail? Is all that information still valid?
Analysts have found that few companies have a complete and accurate view of their IT landscape; on average, a company's data is only 55% accurate. (Source: Nucleus Research Note)
There must be an underlying root cause behind these numbers. Our assumption is that too many people are needed to keep the data up to date, while those entering the data get no benefit from doing so. In addition, the more granular the data are, the more difficult they are to keep up to date. And the more data you have to maintain, the more often you have to update them.
This is a vicious circle, and it is the reason why implementing such a system requires a lot of IT processes and strict IT governance.
We therefore recommend reducing the amount and granularity of data to the minimum that is actually needed.
What can I do about it?
You should be very careful when implementing systems that create big data rather than information. First, you invest time and effort to gather all this data. Second, as a consequence of big data, you will spend a lot of time sorting out the data relevant to your problem. Please be cautious if a provider sells you on all the possibilities you get thanks to the sheer number of details.
What do you think? We would love to hear from you.