(Ed: the post image is archive footage of people rolling a log. Get it?)

It’s all fair in love, war, monitoring and log management, as Datadog announced that its log management solution is now generally available to customers. What that means is that, from today, all Datadog customers will be able to correlate their log data with the state of the underlying infrastructure and applications. All of which should lead to quicker diagnosis of faults and reduced time to resolution.

The key thing here for customers is that all existing Datadog solutions will now have the added sprinkling of log data fairy dust, meaning that, as well as unicorns and rainbows, those customers who formerly used a standalone log management offering (cough, Splunk) should have most of their use cases ticked off by the single platform.

As Datadog sees it (and I tend to agree), there are three pillars of observability within cloud applications: infrastructure metrics, application traces, and event logs. As I have been banging on about for quite some time now, I see a real convergence in the space – the application monitoring vendors are all moving to include infrastructure monitoring, while those who traditionally had an infrastructure-centric view are rapidly backfilling application monitoring into the mix. Both of these distinct groups also see log capture and analysis as a big opportunity.

How do they price?

Datadog’s pricing for ingesting and managing logs follows a value-based model. What this means is that there is a low upfront cost for total logs ingested, encouraging customers to slurp up as much data as possible. Customers then use the filtering capabilities to decide which logs they wish to index and which they wish to archive. The upshot is that the pricing model should work well across the continuum of organizations, from the smallest to much larger ones.
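To make the economics of that model concrete, here is a minimal sketch of how such ingest-versus-index pricing plays out. All rates and the `monthly_cost` helper are invented for illustration; they are not Datadog's actual prices or API.

```python
# Hypothetical illustration of a "value-based" log pricing model:
# a low per-GB charge on everything ingested, plus a higher charge
# only on the subset the customer chooses to index for search.
# Both rates below are assumptions, not Datadog's published pricing.

INGEST_RATE_PER_GB = 0.10  # assumed: cheap, applied to all logs ingested
INDEX_RATE_PER_GB = 1.50   # assumed: pricier, applied only to indexed logs

def monthly_cost(ingested_gb: float, indexed_fraction: float) -> float:
    """Total monthly cost given total ingestion and the share indexed."""
    indexed_gb = ingested_gb * indexed_fraction
    return ingested_gb * INGEST_RATE_PER_GB + indexed_gb * INDEX_RATE_PER_GB

# Filtering aggressively (index only high-value logs) keeps cost low
# even when ingesting everything.
print(monthly_cost(1000, 0.1))  # index 10% of 1 TB -> 250.0
print(monthly_cost(1000, 1.0))  # index everything  -> 1600.0
```

The point of the structure is the incentive it creates: ingestion is cheap enough that customers send everything, and the indexing decision, not the ingest decision, is where spend is controlled.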

Fabien Jallot, the head of DevOps at 24 Sèvres, would seem to be a real believer, explaining that:

We have various Amazon Web Services (AWS) solutions, containers, servers and applications to monitor. Infrastructure metrics and logs often work together to tell the full story of what’s happening within an application, so we find ourselves continuously jumping from Datadog to another third party log management solution to connect the dots – an integrated solution native to Datadog is a tremendous gain of productivity.

I’ve always thought that a combined solution offering application and infrastructure monitoring alongside log analytics was the best approach. It is an approach that pretty much everyone, no matter which side of the problem they originated from, seems to be coming to.

Time will tell how Datadog fares in an increasingly competitive environment.

Ben Kepes

Ben Kepes is a technology evangelist, an investor, a commentator and a business adviser. Ben covers the convergence of technology, mobile, ubiquity and agility, all enabled by the Cloud. His areas of interest extend to enterprise software, software integration, financial/accounting software, platforms and infrastructure as well as articulating technology simply for everyday users.
