This post was written by Eunice Aguilar and Francisco Rodera from REA Group.
Enterprises that need to share and access large amounts of data across multiple domains and services need to build a cloud infrastructure that scales as needs change. REA Group, a digital business that specializes in real estate property, solved this problem using Amazon Managed Streaming for Apache Kafka (Amazon MSK) and a data streaming platform called Hydro.
REA Group's team of more than 3,000 people is guided by our purpose: to change the way the world experiences property. We help people with all aspects of their property experience, not just buying, selling, and renting, through the richest content, data and insights, valuation estimates, and home financing solutions. We deliver unparalleled value to our customers, Australia's real estate agents, by providing access to the largest and most engaged audience of property seekers.
To achieve this, the different technical products across the company often need to move data across domains and services efficiently and reliably.
Within the Data Platform team, we have built a data streaming platform called Hydro to provide this capability across the whole organization. Hydro is powered by Amazon MSK and other tools with which teams can move, transform, and publish data at low latency using event-driven architectures. This type of structure is foundational at REA for building microservices and timely data processing for real-time and batch use cases like time-sensitive outbound messaging, personalization, and machine learning (ML).
In this post, we share our approach to MSK cluster capacity planning.
The problem
Hydro manages a large-scale Amazon MSK infrastructure by providing configuration abstractions, allowing users to focus on delivering value to REA without the cognitive overhead of infrastructure management. As the use of Hydro grows within REA, it's crucial to perform capacity planning to meet user demands while maintaining optimal performance and cost-efficiency.
Hydro uses provisioned MSK clusters in development and production environments. In each environment, Hydro manages a single MSK cluster that hosts multiple tenants with differing workload requirements. Proper capacity planning makes sure the clusters can handle high traffic and provide all users with the desired level of service.
Real-time streaming is a relatively new technology at REA. Many users aren't yet familiar with Apache Kafka, and accurately assessing their workload requirements can be difficult. As the custodians of the Hydro platform, it's our responsibility to find a way to perform capacity planning to proactively assess the impact of user workloads on our clusters.
Goals
Capacity planning involves determining the appropriate size and configuration of the cluster based on current and projected workloads, as well as considering factors such as data replication, network bandwidth, and storage capacity.
Without proper capacity planning, Hydro clusters can become overwhelmed by high traffic and fail to provide users with the desired level of service. Therefore, it's crucial for us to invest time and resources into capacity planning to make sure Hydro clusters can deliver the performance and availability that modern applications require.
The capacity planning approach we follow for Hydro covers three main areas:
- The models used for the calculation of current and estimated future capacity needs, including the attributes used as variables in them
- The models used to assess the approximate expected capacity required for a new Hydro workload joining the platform
- The tooling available to operators and custodians to assess the historical and current capacity consumption of the platform and, based on those, the available headroom
The following diagram shows the interaction of capacity usage and the precalculated maximum usage.
Although we don't have this capability yet, the goal is to take this approach one step further in the future and predict the approximate resource depletion time, as shown in the following diagram.
To make sure our digital operations are resilient and efficient, we must maintain comprehensive observability of our current capacity usage. This detailed oversight allows us not only to understand the performance limits of our existing infrastructure, but also to identify potential bottlenecks before they impact our services and users.
By proactively setting and monitoring well-understood thresholds, we can receive timely alerts and take necessary scaling actions. This approach makes sure our infrastructure can meet demand spikes without compromising on performance, ultimately supporting a seamless user experience and maintaining the integrity of our system.
Solution overview
The MSK clusters in Hydro are configured with the PER_TOPIC_PER_BROKER level of monitoring, which provides metrics at the broker and topic levels. These metrics help us determine the attributes of the cluster usage effectively.
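As a minimal sketch of how this setting maps onto the Amazon MSK API (the cluster ARN, current version, and helper name below are illustrative, not Hydro's actual code), the monitoring level corresponds to the `EnhancedMonitoring` field of the `UpdateMonitoring` operation. The helper only builds the request parameters so they can be reviewed before being applied with `boto3.client("kafka").update_monitoring(**params)`:

```python
def build_monitoring_update(cluster_arn, current_version):
    """Build request parameters for kafka:UpdateMonitoring that enable
    per-topic, per-broker metrics (illustrative helper, not Hydro's code)."""
    return {
        "ClusterArn": cluster_arn,
        "CurrentVersion": current_version,
        "EnhancedMonitoring": "PER_TOPIC_PER_BROKER",
    }

# Example request; the ARN and version here are placeholders.
params = build_monitoring_update(
    "arn:aws:kafka:ap-southeast-2:111122223333:cluster/hydro/example",
    "K3AEGXETSR30VB",
)
```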
However, it wouldn't be practical to display an excessive number of metrics on our monitoring dashboards because that would lead to less clarity and slower insights on the cluster. It's more valuable to choose the most relevant metrics for capacity planning rather than displaying numerous metrics.
Cluster usage attributes
Based on the Amazon MSK best practices guidelines, we have identified several key attributes to assess the health of the MSK cluster. These attributes include the following:
- In/out throughput
- CPU usage
- Disk space usage
- Memory usage
- Producer and consumer latency
- Producer and consumer throttling
For more information on right-sizing your clusters, see Best practices for right-sizing your Apache Kafka clusters to optimize performance and cost, Best practices for Standard brokers, Monitor CPU usage, Monitor disk space, and Monitor Apache Kafka memory.
The following table contains the detailed list of all the attributes we use for MSK cluster capacity planning in Hydro.
| Attribute Name | Attribute Type | Units | Comments |
|---|---|---|---|
| Bytes in | Throughput | Bytes per second | Depends on the aggregate Amazon EC2 network, Amazon EBS network, and Amazon EBS storage throughput |
| Bytes out | Throughput | Bytes per second | Depends on the aggregate Amazon EC2 network, Amazon EBS network, and Amazon EBS storage throughput |
| Consumer latency | Latency | Milliseconds | High or unacceptable latency values usually indicate user experience degradation before reaching actual resource (for example, CPU and memory) depletion |
| CPU usage | Capacity limits | % CPU user + CPU system | Should stay under 60% |
| Disk space usage | Persistent storage | Bytes | Should stay under 85% |
| Memory usage | Capacity limits | % Memory in use | Should stay under 60% |
| Producer latency | Latency | Milliseconds | High or unacceptable sustained latency values usually indicate user experience degradation before reaching actual capacity limits or actual resource (for example, CPU or memory) depletion |
| Throttling | Capacity limits | Milliseconds, bytes, or messages | High or unacceptable sustained throttling values indicate capacity limits are being reached before actual resource (for example, CPU or memory) depletion |
By monitoring these attributes, we can quickly evaluate the performance of the clusters as we add more workloads to the platform. We then match these attributes to the relevant MSK metrics available.
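To illustrate how the percentage ceilings in the table translate into remaining headroom, here is a minimal sketch; the metric keys and sample usage values are illustrative, and the ceilings are the best-practice ones quoted above:

```python
# Best-practice ceilings from the table above, as percentages.
LIMITS = {"cpu_percent": 60.0, "disk_percent": 85.0, "memory_percent": 60.0}

def headroom(current):
    """Remaining capacity, in percentage points, before each attribute
    reaches its recommended ceiling (negative means the ceiling is breached)."""
    return {
        name: LIMITS[name] - value
        for name, value in current.items()
        if name in LIMITS
    }

# Example cluster usage snapshot (illustrative values).
usage = {"cpu_percent": 42.5, "disk_percent": 70.0, "memory_percent": 55.0}
print(headroom(usage))
# -> {'cpu_percent': 17.5, 'disk_percent': 15.0, 'memory_percent': 5.0}
```

In this example, memory is the closest attribute to its ceiling, so it would be the first candidate for a scaling action.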
Cluster capacity limits
During the initial capacity planning, our MSK clusters weren't receiving enough traffic to provide us with a clear idea of their capacity limits. To address this, we used the AWS performance testing framework for Apache Kafka to evaluate the theoretical performance limits. We ran performance and capacity tests on test MSK clusters that had the same cluster configurations as our development and production clusters. We obtained a more comprehensive understanding of the cluster's performance by running these various test scenarios. The following figure shows an example of a test cluster's performance metrics.
To perform the tests within a specific timeframe and budget, we focused on the test scenarios that could efficiently measure the cluster's capacity. For instance, we ran tests that involved sending high-throughput traffic to the cluster and creating topics with many partitions.
After each test, we collected the metrics of the test cluster and extracted the maximum values of the key cluster usage attributes. We then consolidated the results and determined the most appropriate limits for each attribute. The following screenshot shows an example of the exported test cluster's performance metrics.
Capacity monitoring dashboards
As part of our platform management process, we conduct monthly operational reviews to maintain optimal performance. This involves analyzing an automated operational report that covers all the systems on the platform. During the review, we evaluate the service level objectives (SLOs) based on select service level indicators (SLIs) and assess the monitoring alerts triggered in the previous month. By doing so, we can identify any issues and take corrective actions.
To assist us in conducting the operational reviews and to provide us with an overview of the cluster's usage, we developed a capacity monitoring dashboard, as shown in the following screenshot, for each environment. We built the dashboard as infrastructure as code (IaC) using the AWS Cloud Development Kit (AWS CDK). The dashboard is generated and managed automatically as a component of the platform infrastructure, alongside the MSK cluster.
By defining the maximum capacity limits of the MSK cluster in a configuration file, the limits are automatically loaded into the capacity dashboard as annotations in the Amazon CloudWatch graph widgets. The capacity limit annotations are clearly visible and provide us with a view of the cluster's capacity headroom based on usage.
We determined the capacity limits for throughput, latency, and throttling through the performance testing. Capacity limits of the other metrics, such as CPU, disk space, and memory, are based on the Amazon MSK best practices guidelines.
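The translation from a limits config into dashboard annotations can be sketched as follows. The config layout and metric names here are hypothetical (the post doesn't show Hydro's actual file format), but the output matches the `{"value", "label"}` shape that CloudWatch dashboard graph widgets accept as horizontal annotations:

```python
# Hypothetical capacity-limits config; Hydro's actual file format is not shown here.
capacity_limits = {
    "BytesInPerSec": {"value": 96_000_000, "unit": "bytes/sec"},
    "CpuUser": {"value": 60, "unit": "%"},
    "KafkaDataLogsDiskUsed": {"value": 85, "unit": "%"},
}

def to_annotations(limits):
    """Turn each configured limit into the {'value', 'label'} shape that
    CloudWatch graph widgets accept as horizontal annotations."""
    return [
        {"value": spec["value"], "label": f"{metric} limit ({spec['unit']})"}
        for metric, spec in limits.items()
    ]

annotations = to_annotations(capacity_limits)
```

In a CDK stack, each entry in `annotations` would feed a graph widget's horizontal annotations, so the dashboard and the limits file never drift apart.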
During the operational reviews, we proactively assess the capacity monitoring dashboards to determine if more capacity needs to be added to the cluster. This approach allows us to identify and address potential performance issues before they have a significant impact on user workloads. It's a preventative measure rather than a reactive response to performance degradation.
Preemptive CloudWatch alarms
We have implemented preemptive CloudWatch alarms in addition to the capacity monitoring dashboards. These alarms are configured to alert us before a given capacity metric reaches its threshold, notifying us when the sustained value reaches 80% of the capacity limit. This method of monitoring enables us to take immediate action instead of waiting for our monthly review cadence.
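The 80%-of-limit rule can be sketched as a small helper that derives alarm parameters from a capacity limit. The alarm name, evaluation periods, and metric names below are illustrative assumptions rather than Hydro's actual configuration; the dict follows the general shape of a `cloudwatch:PutMetricAlarm` request:

```python
ALERT_FRACTION = 0.8  # alert at 80% of the capacity limit, as described above

def preemptive_alarm(metric, capacity_limit):
    """Illustrative CloudWatch alarm parameters that fire on sustained usage
    at 80% of the capacity limit (names and periods are assumptions)."""
    return {
        "AlarmName": f"hydro-{metric}-capacity",
        "MetricName": metric,
        "Namespace": "AWS/Kafka",
        "Statistic": "Average",
        "Threshold": capacity_limit * ALERT_FRACTION,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "EvaluationPeriods": 5,  # require a sustained breach, not a single spike
        "Period": 60,
    }

# For a 60% CPU ceiling, the alarm threshold lands at 48% sustained usage.
alarm = preemptive_alarm("CpuUser", 60.0)
```

Requiring several consecutive evaluation periods is what makes the alert reflect a sustained value rather than a momentary spike.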
Value added by our capacity planning approach
As operators of the Hydro platform, our approach to capacity planning has provided a consistent way to assess how far we are from the theoretical capacity limits of all our clusters, regardless of their configuration. Our capacity monitoring dashboards are a key observability tool that we review regularly; they're also helpful while troubleshooting performance issues. They help us quickly tell if capacity constraints could be a potential root cause of any ongoing issues. This means that we can use our current capacity planning approach and tooling both proactively and reactively, depending on the situation and need.
Another benefit of this approach is that we calculate the theoretical maximum usage values that a given cluster with a specific configuration can withstand from a separate cluster, without impacting any actual users of the platform. We spin up short-lived MSK clusters using our AWS CDK based automation and perform capacity tests on them. We do this often to assess the impact, if any, that changes made to the cluster's configurations have on the known capacity limits. Following our current feedback loop, if these newly calculated limits differ from the previously known ones, they're used to automatically update our capacity dashboards and alarms in CloudWatch.
Future evolution
Hydro is a platform that's constantly improving with the introduction of new features. One of these features is the ability to conveniently create Kafka client applications. To meet the increasing demand, it's essential to stay ahead of capacity planning. Although the approach discussed here has served us well so far, it's by no means the final stage, and there are capabilities that we need to extend and areas we need to improve on.
Multi-cluster architecture
To support critical workloads, we're considering a multi-cluster architecture on Amazon MSK, which would also affect our capacity planning. In the future, we plan to profile workloads based on metadata, cross-check them with capacity metrics, and place them in the appropriate MSK cluster. In addition to the existing provisioned MSK clusters, we'll evaluate how the Amazon MSK Serverless cluster type can complement our platform architecture.
Usage trends
We have added CloudWatch anomaly detection graphs to our capacity monitoring dashboards to track any unusual trends. However, because the CloudWatch anomaly detection algorithm only evaluates up to 2 weeks of metric data, we'll reassess its usefulness as we onboard more workloads. Apart from identifying usage trends, we'll explore options to implement an algorithm with predictive capabilities to detect when MSK cluster resources degrade and deplete.
Conclusion
Initial capacity planning lays a solid foundation for future enhancements and provides a safe onboarding process for workloads. To achieve optimal performance of our platform, we must make sure our capacity planning strategy evolves in step with the platform's growth. As a result, we maintain a close collaboration with AWS to continuously develop more features that meet our business needs and stay in sync with the Amazon MSK roadmap. This makes sure we stay ahead of the curve and can deliver the best possible experience to our users.
We encourage all Amazon MSK users to maximize their cluster's potential by starting to plan their capacity. Implementing the strategies described in this post is a great first step and will lead to smoother operations and significant savings in the long run.
About the Authors
Eunice Aguilar is a Staff Data Engineer at REA. She has worked in software engineering across various industries over the years, most recently in property data. She's also an advocate for women interested in transitioning into tech, as well as the well-versed, from whom she takes inspiration.
Francisco Rodera is a Staff Systems Engineer at REA. He has extensive experience building and operating large-scale distributed systems. His interests are automation, observability, and applying SRE practices to business-critical services and platforms.
Khizer Naeem is a Technical Account Manager at AWS. He specializes in Efficient Compute and has a deep passion for Linux and open-source technologies, which he leverages to help enterprise customers modernize and optimize their cloud workloads.