Data has become the fuel that drives enterprise operations. We are entering the age of hyper-personalization, and IoT technology is changing business models. Managing all that data and the cloud data infrastructure behind it is rapidly becoming as critical as managing the IT infrastructure itself. Time to think about a new way of organizing data operations.
Here at Triple, we not only build cloud infrastructures for multinational companies, but also monitor and maintain them. Our clients rely on their cloud infrastructures to run mission-critical applications, so our TechOps heroes monitor them 24/7. In addition, we continuously collect performance data, because you need real-time insight into cloud infrastructure performance.
The same goes for security. Cloud infrastructure security has become such a complex issue that even the largest enterprises need centralized units of highly skilled specialists to keep their data and systems safe. All systems are monitored from a Security Operations Center, or SOC for short, which is equipped with the tools and the mandate to act on threats proactively.
Towards a Data Operations Center?
As data capabilities have become more and more mission-critical, it makes sense to do the same for data. You could call it a Data Operations Center, or DOC. Let’s think a bit about what such a DOC would have to look like...
First, you would need the capability to continuously monitor your data environments, much like the way we monitor environments for security, performance and technical issues. In data operations, there are many moving parts. Collecting, ingesting, transforming, serving and analyzing data is a complex process involving many different applications and storage locations. At any point in the process, a stored procedure may get stuck in an infinite loop, or an integration service like Azure Data Factory may be using more processing power than you want. Another frequent failure we see is databases reaching their maximum capacity and clogging up the process.
Being able to automatically scan for errors and failures like these and seeing them all reported on a central dashboard would be a powerful capability.
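To make this concrete, here is a minimal sketch of one such automated check in Python, assuming a SQL Server or Azure SQL database reachable via pyodbc. It flags requests that have been running longer than a threshold by querying the sys.dm_exec_requests view; the connection string, credentials and threshold are placeholders, not our actual tooling.

```python
import pyodbc

# Placeholder connection string; adjust for your environment.
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=mydb;UID=monitor;PWD=..."
)
MAX_RUNTIME_SECONDS = 600  # flag anything running longer than 10 minutes

def find_long_running_requests():
    """Return active requests that have exceeded the runtime threshold."""
    query = """
        SELECT session_id, status, command, total_elapsed_time / 1000 AS seconds
        FROM sys.dm_exec_requests
        WHERE total_elapsed_time / 1000 > ?
    """
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        cursor.execute(query, MAX_RUNTIME_SECONDS)
        return cursor.fetchall()

if __name__ == "__main__":
    for row in find_long_running_requests():
        # In a real DOC this would feed a central dashboard or alerting system.
        print(f"session {row.session_id}: {row.command} running {row.seconds}s")
```

A check like this would run on a schedule, with results pushed to the central dashboard rather than printed.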
Ensuring data quality
In the modern enterprise, data quality is essential. Decision-making is becoming more and more data-driven and, in cases like personalization or yield management, automated. The old adage ‘garbage in, garbage out’ applies: bad data means bad decisions, and you can’t afford to make bad decisions in competitive markets. This is why you need to monitor data quality and make sure all data is up to date, consistent, accurate and complete. Fortunately, much of this can be automated.
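As a sketch of what such automated quality checks could look like, the snippet below uses pandas to check completeness, duplicates and freshness on a hypothetical customer extract; the file name, column names and thresholds are illustrative assumptions, not a prescription.

```python
import pandas as pd

def quality_report(df: pd.DataFrame, freshness_col: str, max_age_days: int = 1) -> dict:
    """Run a few basic quality checks: completeness, consistency, freshness."""
    now = pd.Timestamp.now()
    return {
        # Completeness: share of missing values per column.
        "missing_ratio": df.isna().mean().to_dict(),
        # Consistency: duplicate rows usually point to ingestion problems.
        "duplicate_rows": int(df.duplicated().sum()),
        # Freshness: is the newest record recent enough?
        "stale": (now - df[freshness_col].max()).days > max_age_days,
    }

# Illustrative usage on a hypothetical customer extract.
customers = pd.read_csv("customers.csv", parse_dates=["updated_at"])
print(quality_report(customers, freshness_col="updated_at"))
```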
Managing cost
Another aspect of this is cost management. Cloud infrastructure cost control can be a tough issue to deal with. Get it wrong (forget to turn off an unused database, for example) and your monthly Azure bill could be higher than you anticipated. Real-time insight into your use of cloud resources gives you the cost control you need to overcome the last pockets of resistance to managing data in the cloud, and gives everyone the confidence that storing and processing data in the cloud is safe, effective and efficient.
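One way to get this kind of insight is to analyze exported usage data. The sketch below assumes a daily cost export in CSV form with Date, ResourceId and Cost columns (actual export schemas vary) and flags resources whose latest daily cost jumps well above their own average.

```python
import pandas as pd

# Assumed export columns; real Azure cost export schemas vary.
usage = pd.read_csv("azure_usage.csv", parse_dates=["Date"])

# Total daily cost per resource.
daily = usage.groupby(["ResourceId", "Date"])["Cost"].sum().reset_index()

# Flag resources whose latest daily cost is more than double their baseline.
for resource_id, group in daily.groupby("ResourceId"):
    group = group.sort_values("Date")
    if len(group) < 2:
        continue  # no history to compare against yet
    baseline = group["Cost"].iloc[:-1].mean()
    latest = group["Cost"].iloc[-1]
    if latest > 2 * baseline:
        print(f"Cost spike on {resource_id}: {latest:.2f} vs avg {baseline:.2f}")
```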
The tools you need
If you want to set up your own DOC, you may find that your cloud environment lacks the right tools. Your Cloud Data Competence Center, or Data Operations Center, will be the home of all data and knowledge on the performance of your cloud data infrastructure: the central place that holds both the expertise and the tools to make sure data flows through your organization and powers good decisions. The specialists who staff it can advise all other departments on data quality, infrastructure performance and cost control, while at the same time enforcing policies and best practices.

But specialists need specialist tooling, and working with Azure Monitor we have seen some serious shortcomings. In Azure Monitor, data is only available for the previous 14 days, up to a maximum of three months, depending on the kind of data. This makes historical analysis, benchmarking and spotting long-term trends difficult. It is also hard to trace exactly when failures and errors occurred, so we have built custom tools that do give us the data we need to properly diagnose data problems. Furthermore, Azure Monitor does not report capacity problems: when a database reaches maximum capacity, data ingestion simply stops without notice. Here, too, we were forced to build our own functionality.
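We cannot share our internal tooling here, but the general workaround for the retention limit can be sketched: periodically pull the metrics you care about and append them to your own long-term store. In the sketch below, collect_metrics() is a hypothetical stand-in for your collection method (the azure-monitor-query SDK, direct database queries, or something else), and SQLite stands in for whatever historical store you prefer.

```python
import datetime as dt
import sqlite3

def collect_metrics() -> list[tuple[str, float]]:
    """Hypothetical stand-in: pull current metric values from your own
    sources, e.g. the azure-monitor-query SDK or direct database queries."""
    return [("db_used_bytes", 41e9), ("db_max_bytes", 50e9)]  # dummy values

def store_snapshot(db_path: str = "metrics_history.db") -> None:
    """Append a timestamped snapshot so history survives beyond Azure
    Monitor's retention window, enabling long-term trend analysis."""
    metrics = collect_metrics()
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS metrics (ts TEXT, name TEXT, value REAL)")
    ts = dt.datetime.now(dt.timezone.utc).isoformat()
    conn.executemany(
        "INSERT INTO metrics VALUES (?, ?, ?)",
        [(ts, name, value) for name, value in metrics],
    )
    conn.commit()
    conn.close()

    # Capacity check: warn before ingestion silently stops.
    values = dict(metrics)
    if values["db_used_bytes"] / values["db_max_bytes"] > 0.9:
        print("WARNING: database above 90% of capacity")

if __name__ == "__main__":
    store_snapshot()  # run on a schedule, e.g. cron or an Azure Functions timer
```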
It’s about control
There is still quite a bit of hesitance out there about adopting cloud data storage and processing. Data people are control freaks, and rightly so. When working with data, you need to control what gets processed, when and in what way, and you need to be sure the data your front-ends are seeing is correct. On top of that, there is enormous pressure to ensure data security and privacy and to control costs. Centralizing cloud data infrastructure management into a competence center or Data Operations Center, the same way most organizations do with security, is a good way of taking control of all aspects of data infrastructure.
Do you want to know how we help clients with their data infrastructure? Take a look at our pages on data and analytics and cloud managed services.