13 June 2019
Today’s digital world is a challenging one.
In the world of IT, teams are expected to provide a stable and seamless network monitoring experience.
To do this, IT departments need to be equipped with the right tools.
The industry must embrace change if it is to adapt for the future, and to prepare for that future we must first understand the changes taking place in IT operations now.
The first is virtualisation. We moved from single bare-metal servers each running a few applications to a single server running many virtualised “servers”.
The benefit is lower investment for IT operators, since many virtual machines can be consolidated onto fewer physical servers.
The second shift has been the adoption of containers.
Instead of virtualising the hardware and running individual operating systems, containers run on top of the operating system of a host or node.
This allows us to run multiple workloads on top of a single operating system.
These nodes can be physical or virtual, and because one “server” can run many containers, workloads can be balanced more efficiently.
The most recent shift is Functions as a Service (FaaS).
Also known as serverless, FaaS eliminates the need for organisations to maintain their own servers by outsourcing infrastructure to a third party.
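As a minimal sketch of what this outsourcing means in practice (the names here are illustrative, in the style of common FaaS platforms such as AWS Lambda), a serverless workload is reduced to a single function that the provider invokes on demand; provisioning, scaling and retiring servers is entirely the platform's job:

```python
# Hypothetical FaaS handler: the platform supplies the runtime,
# we supply only the function body.
def handler(event, context):
    # 'event' carries the trigger payload (HTTP request, queue message, ...);
    # 'context' exposes runtime metadata provided by the platform.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, the handler is just a function we can call directly.
result = handler({"name": "ops"}, None)
print(result["body"])  # prints "Hello, ops!"
```

The operator never sees, patches or monitors the machine this eventually runs on, which is precisely the shift discussed below.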
With the abstraction of computing, the hosting and operating infrastructure has become its own commodity.
This trend towards abstraction suggests that within the next few years, most operators will no longer need to care about the underlying infrastructure.
The fewer applications we run on bare metal, the less we should have to care about it.
An operator running an application on a public cloud has no need to care about physical infrastructure behind the cloud.
However, it also means they cannot monitor that infrastructure for potential network issues.
The question then arises, what does the future of monitoring look like?
To answer this question, we must focus on the application itself, rather than the workloads running on the infrastructure.
One good way to achieve this is observability: metrics, logs and traces pulled or pushed directly from the workload or application itself.
With this high-cardinality data, we are able to infer the current state of a system from its external outputs.
Once the part of the application causing problems has been identified, we can, for example, inspect the logs to check for write issues on a specific node of a database cluster.
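A minimal sketch of this idea in Python (the metric names, labels and log format are illustrative, not tied to any particular monitoring product): the application emits a counter metric and a structured log line as external outputs, and from those outputs alone we can infer which node is unhealthy without ever touching the infrastructure.

```python
import json
import time
from collections import Counter

METRICS = Counter()  # in-memory metric store a scraper could pull from

def log(level, message, **labels):
    # Structured log line: a machine-parseable external output.
    print(json.dumps({"ts": time.time(), "level": level,
                      "msg": message, **labels}))

def write_record(node, ok):
    # Instrumented workload: every write increments a labelled counter,
    # and failures additionally emit a structured log line.
    METRICS[("db_writes_total", node, "success" if ok else "error")] += 1
    if not ok:
        log("error", "write failed", node=node, component="db")

# Simulated traffic: node-2 is failing its writes.
for node, ok in [("node-1", True), ("node-2", False), ("node-1", True)]:
    write_record(node, ok)

# From the emitted metrics alone we can infer the failing node.
errors = {key[1]: count for key, count in METRICS.items()
          if key[2] == "error"}
print(errors)  # prints {'node-2': 1}
```

In a real system the counter would be exposed over an HTTP endpoint for a collector to scrape, and the log lines shipped to a log store, but the principle is the same: the application's own outputs carry the signal.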
Ultimately, the future of monitoring is changing and we are going to rely on the infrastructure less than ever before.
But, to understand the future, we must embrace change.