Shifting left with telemetry pipelines: The future of data tiering at petabyte scale

In today's rapidly evolving observability and security use cases, the concept of "shifting left" has moved beyond just software development. With the constant, rapid rise of data volumes across logs, metrics, traces, and events, organizations have to be far more deliberate in their efforts to turn chaos into control when it comes to understanding and managing their streaming data sets. Teams are striving to be more proactive in the management of their mission-critical production systems and want to achieve far earlier detection of potential issues. This approach emphasizes moving traditionally late-stage activities, like seeing, understanding, transforming, filtering, analyzing, testing, and monitoring, closer to the beginning of the data creation cycle. With the growth of next-generation architectures, cloud-native technologies, microservices, and Kubernetes, enterprises are increasingly adopting Telemetry Pipelines to enable this shift. A key element in this movement is the concept of data tiering, a data-optimization strategy that plays a critical role in aligning the cost-value ratio for observability and security teams.

The Shift-Left Movement: Chaos to Control

"Shifting left" originated in the realm of DevOps and software testing. The idea was simple: find and fix problems earlier in the process to reduce risk, improve quality, and accelerate development. As organizations have embraced DevOps and continuous integration/continuous delivery (CI/CD) pipelines, the benefits of shifting left have become increasingly clear: less rework, faster deployments, and more robust systems.

In the context of observability and security, shifting left means performing the analysis, transformation, and routing of logs, metrics, traces, and events far upstream, very early in their lifecycle, which is a very different approach from the traditional "centralize then analyze" method. By integrating these processes earlier, teams can not only drastically reduce the cost of otherwise prohibitive data volumes, but can also detect anomalies, performance issues, and potential security threats much sooner, before they become major problems in production. The rise of microservices and Kubernetes architectures has particularly accelerated this need, because the complexity and distributed nature of cloud-native applications demand more granular, real-time insights, and each localized data set is far more distributed than the monoliths of the past.

This leads to the growing adoption of Telemetry Pipelines.

What Are Telemetry Pipelines?

Telemetry Pipelines are purpose-built to support next-generation architectures. They are designed to provide visibility into observability and security data and to pre-process, analyze, transform, and route it from any source to any destination. These pipelines give organizations a comprehensive toolbox and set of capabilities to control and optimize the flow of telemetry data, ensuring that the right data reaches the right downstream destination in the right format to enable all the right use cases. They offer a flexible and scalable way to integrate multiple observability and security platforms, tools, and services.

For example, in a Kubernetes environment, where ephemeral containers scale up and down dynamically, logs, metrics, and traces from those dynamic workloads need to be processed and stored in real time. Telemetry Pipelines provide the capability to aggregate data from various services, be granular about what you want to do with that data, and ultimately deliver it downstream to the appropriate end destination, whether that is a traditional security platform like Splunk, which has a high unit cost for data, or a more scalable and cost-effective storage location optimized for large datasets over the long term, such as AWS S3.
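
As a rough, vendor-neutral sketch of that "right data, right format, right destination" idea, the Python snippet below shapes the same Kubernetes log event one way for a Splunk HTTP Event Collector-style payload and another way for newline-delimited JSON archived to S3. The field names and sourcetype are invented for illustration; a real pipeline would express this declaratively rather than in hand-written code.

    import json
    import time

    # One raw container log event, roughly as a pipeline agent might pick it up
    # from a Kubernetes node (all field names here are illustrative).
    event = {
        "namespace": "checkout",
        "pod": "checkout-api-5d9f7",
        "severity": "ERROR",
        "message": "payment provider timeout after 3 retries",
        "ts": time.time(),
    }

    def to_splunk_hec(e: dict) -> str:
        # Shape the record like the envelope Splunk's HTTP Event Collector expects:
        # an epoch timestamp plus the payload under an "event" key.
        return json.dumps({
            "time": e["ts"],
            "sourcetype": "kube:container",
            "event": {k: v for k, v in e.items() if k != "ts"},
        })

    def to_s3_ndjson(e: dict) -> str:
        # Cheap archival format: newline-delimited JSON, one object per event.
        return json.dumps(e) + "\n"

    print(to_splunk_hec(event))
    print(to_s3_ndjson(event), end="")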

The Role of Data Tiering

As telemetry data continues to grow at an exponential rate, enterprises face the challenge of managing costs without compromising the insights they need in real time, or the data retention required for audit, compliance, or forensic security investigations. This is where data tiering comes in. Data tiering is a strategy that segments data into different levels (tiers) based on its value and use case, enabling organizations to optimize both cost and performance.

In observability and security, this means identifying high-value data that requires immediate analysis and applying far more pre-processing and analysis to it, compared with lower-value data that can simply be stored more affordably and accessed later, if necessary. This tiered approach typically includes:

  1. Top Tier (High-Value Data): Critical telemetry data that is vital for real-time analysis and troubleshooting is ingested and stored in high-performance platforms like Splunk or Datadog. This data might include high-priority logs, metrics, and traces that are essential for immediate action. Although this could include a lot of data in raw formats, the high cost of these platforms typically leads teams to route only the data that is truly necessary.
  2. Middle Tier (Moderate-Value Data): Data that is important but does not meet the bar for a premium, traditional centralized system is instead routed to more cost-efficient observability platforms with newer architectures, like Edge Delta. This might include a much more comprehensive set of logs, metrics, and traces that gives you a wider, more useful understanding of everything happening inside your mission-critical systems.
  3. Bottom Tier (All Data): Because S3 is extremely inexpensive relative to observability and security platforms, all telemetry data in its entirety can feasibly be stored in low-cost solutions like AWS S3 for long-term trend analysis, audit, compliance, or investigation purposes. This is typically cold storage that can be accessed on demand but does not need to be actively processed.

This multi-tiered architecture enables large enterprises to get the insights they need from their data while managing costs and ensuring compliance with data retention policies. It is important to keep in mind that the Middle Tier typically includes all of the data in the Top Tier and more, and the same goes for the Bottom Tier (which includes all data from the higher tiers and more). Because the cost per tier of the underlying downstream destinations can, in many cases, differ by orders of magnitude, there is little benefit in not also duplicating everything you send to Datadog into your S3 buckets, for instance. It is much simpler and more useful to have the full data set in S3 for any later needs.
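
As a minimal illustration of that superset relationship, here is a hedged Python sketch of tier selection. The severity thresholds and destination names (an S3 archive bucket, Edge Delta, Splunk) are assumptions for the example, not a prescription.

    # Sketch of three-tier routing. Lower tiers are supersets of higher ones:
    # everything is archived, a broader slice goes to the cost-efficient middle
    # tier, and only the highest-value events reach the premium top tier.
    TOP_TIER_SEVERITIES = {"ERROR", "CRITICAL"}
    MIDDLE_TIER_SEVERITIES = TOP_TIER_SEVERITIES | {"WARN", "INFO"}

    def destinations_for(event: dict) -> list:
        targets = ["s3://telemetry-archive"]    # Bottom Tier: keep everything
        severity = event.get("severity", "INFO")
        if severity in MIDDLE_TIER_SEVERITIES:
            targets.append("edge-delta")        # Middle Tier: broad, cost-efficient
        if severity in TOP_TIER_SEVERITIES:
            targets.append("splunk")            # Top Tier: premium, real-time analysis
        return targets

    print(destinations_for({"severity": "CRITICAL", "message": "OOMKilled in prod"}))
    print(destinations_for({"severity": "DEBUG", "message": "cache warmed"}))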

How Telemetry Pipelines Enable Data Tiering

Telemetry Pipelines serve as the backbone of this tiered data approach by giving teams full control and flexibility in routing data based on predefined, out-of-the-box rules and/or business logic specific to their needs. Here is how they facilitate data tiering:

  • Real-Time Processing: For high-value data that requires immediate action, Telemetry Pipelines provide real-time processing and routing, ensuring that critical logs, metrics, or security alerts are delivered to the right tool instantly. Because Telemetry Pipelines have an agent component, much of this processing can happen locally in an extremely compute-, memory-, and disk-efficient manner.
  • Filtering and Transformation: Not all telemetry data is created equal, and teams have very different needs for how they might use it. Telemetry Pipelines enable comprehensive filtering and transformation of any log, metric, trace, or event, ensuring that only the most critical information is sent to high-cost platforms, while the full dataset (including less critical data) can be routed to more cost-efficient storage (see the sketch after this list).
  • Data Enrichment and Routing: Telemetry Pipelines can ingest data from a wide variety of sources (Kubernetes clusters, cloud infrastructure, CI/CD pipelines, third-party APIs, and so on) and then apply various enrichments to that data before routing it to the appropriate downstream platform.
  • Dynamic Scaling: As enterprises scale their Kubernetes clusters and increase their use of cloud services, the volume of telemetry data grows significantly. Because their architecture is aligned with these environments, Telemetry Pipelines scale dynamically to handle this increasing load without affecting performance or data integrity.
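
To make the filtering, transformation, and enrichment steps concrete, here is a small hedged Python sketch over an in-memory list of events. The field names, the DEBUG-dropping rule, and the ownership lookup table are assumptions for illustration, not any vendor's actual processing language.

    # Sketch of filter -> transform -> enrich, as described in the list above.
    SERVICE_OWNERS = {"checkout": "team-payments", "search": "team-discovery"}  # lookup table

    def process(events):
        for e in events:
            # Filter: drop noisy debug chatter before it reaches a paid platform.
            if e.get("severity") == "DEBUG":
                continue
            # Transform: trim oversized messages and normalize the service field name.
            e["message"] = e.get("message", "")[:512]
            e["service"] = e.pop("svc", e.get("service", "unknown"))
            # Enrich: attach ownership metadata from the lookup table.
            e["owner"] = SERVICE_OWNERS.get(e["service"], "unowned")
            yield e

    sample = [
        {"svc": "checkout", "severity": "ERROR", "message": "card charge failed: upstream 503"},
        {"svc": "search", "severity": "DEBUG", "message": "cache warmed"},
    ]
    for out in process(sample):
        print(out)
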
The Benefits for Observability and Security Teams

By adopting Telemetry Pipelines and data tiering, observability and security teams benefit in several ways:

  • Cost Efficiency: Enterprises can significantly reduce costs by routing data to the most appropriate tier based on its value, avoiding the unnecessary expense of storing low-value data in high-performance platforms.
  • Faster Troubleshooting: Not only can some monitoring and anomaly detection happen within the Telemetry Pipelines themselves, but critical telemetry data is also processed extremely quickly and routed to high-performance platforms for real-time analysis, enabling teams to detect and resolve issues much faster.
  • Enhanced Security: Data enrichment from lookup tables, pre-built packs that apply to many well-known third-party technologies, and more scalable long-term retention of larger datasets all give security teams a better ability to find and identify IOCs across all logs and telemetry data, improving their ability to detect threats early and respond to incidents faster.
  • Scalability: As enterprises grow and their telemetry needs expand, Telemetry Pipelines can naturally scale with them, ensuring that they can handle increasing data volumes without sacrificing performance.
It all starts with pipelines!

Telemetry Pipelines are the core foundation for sustainably managing the chaos of telemetry, and they are critical to any attempt to wrangle growing volumes of logs, metrics, traces, and events. As large enterprises continue to shift left and adopt more proactive approaches to observability and security, Telemetry Pipelines and data tiering are becoming essential to this transformation. By using a tiered data-management strategy, organizations can optimize costs, improve operational efficiency, and enhance their ability to detect and resolve issues earlier in the life cycle. One additional key advantage that we did not focus on in this article, but that is important to call out in any discussion of modern Telemetry Pipelines, is their full end-to-end support for OpenTelemetry (OTel), which is increasingly becoming the industry standard for telemetry data collection and instrumentation. With OTel support built in, these pipelines integrate seamlessly with a wide range of environments, enabling observability and security teams to collect, process, and route telemetry data from any source with ease. This broad compatibility, combined with the flexibility of data tiering, allows enterprises to achieve unified, scalable, and cost-efficient observability and security that is designed to scale to tomorrow and beyond.
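
For instance, an application instrumented with the OpenTelemetry Python SDK can export traces over OTLP to whatever pipeline or collector endpoint you run. The sketch below assumes the conventional OTLP/gRPC port 4317 on localhost and that the opentelemetry-sdk and opentelemetry-exporter-otlp packages are installed; the service and attribute names are invented for the example.

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    # Send spans to a locally running pipeline/collector listening for OTLP over gRPC.
    provider = TracerProvider()
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
    )
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")
    with tracer.start_as_current_span("charge-card") as span:
        span.set_attribute("payment.provider", "example-psp")
        # ... application work happens here; the span is exported when the block exits ...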


To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon North America, in Salt Lake City, Utah, on November 12-15, 2024.
