Amazon EMR Serverless lets you run open source big data frameworks such as Apache Spark and Apache Hive without managing clusters and servers. With EMR Serverless, you can run analytics workloads at any scale with automatic scaling that resizes resources in seconds to meet changing data volumes and processing requirements.
We have launched job worker metrics in Amazon CloudWatch for EMR Serverless. This feature allows you to monitor vCPUs, memory, ephemeral storage, and disk I/O allocation and utilization metrics at an aggregate worker level for your Spark and Hive jobs.
This post is part of a series about EMR Serverless observability. In this post, we discuss how to use these CloudWatch metrics to monitor EMR Serverless workers in near real time.
CloudWatch metrics for EMR Serverless
At the per-Spark-job level, EMR Serverless emits the following new metrics to CloudWatch for both the driver and the executors. These metrics provide granular insights into job performance, bottlenecks, and resource utilization.
| Metric | Description |
| --- | --- |
| WorkerCpuAllocated | The total number of vCPU cores allocated for workers in a job run |
| WorkerCpuUsed | The total number of vCPU cores used by workers in a job run |
| WorkerMemoryAllocated | The total memory in GB allocated for workers in a job run |
| WorkerMemoryUsed | The total memory in GB used by workers in a job run |
| WorkerEphemeralStorageAllocated | The number of bytes of ephemeral storage allocated for workers in a job run |
| WorkerEphemeralStorageUsed | The number of bytes of ephemeral storage used by workers in a job run |
| WorkerStorageReadBytes | The number of bytes read from storage by workers in a job run |
| WorkerStorageWriteBytes | The number of bytes written to storage by workers in a job run |
The following are the benefits of monitoring your EMR Serverless jobs with CloudWatch:
- Optimize resource utilization – You can gain insights into resource utilization patterns and optimize your EMR Serverless configurations for better efficiency and cost savings. For example, underutilization of vCPUs or memory can reveal resource waste, allowing you to right-size workers and achieve potential cost savings.
- Diagnose common errors – You can identify root causes and mitigations for common errors without log diving. For example, you can monitor the utilization of ephemeral storage and mitigate disk bottlenecks by preemptively allocating more storage per worker.
- Gain near real-time insights – CloudWatch provides near real-time monitoring capabilities, allowing you to track the performance of your EMR Serverless jobs while they are running, for quick detection of any anomalies or performance issues.
- Configure alerts and notifications – CloudWatch lets you set up alarms using Amazon Simple Notification Service (Amazon SNS) based on predefined thresholds, so you can receive notifications through email or text message when specific metrics reach critical levels (see the example alarm command after this list).
- Conduct historical analysis – CloudWatch stores historical data, allowing you to analyze trends over time, identify patterns, and make informed decisions for capacity planning and workload optimization.
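As an illustration, the following AWS CLI command sketches one way to create such an alarm on aggregate worker memory usage for a specific job run. The metric namespace, dimension names, and threshold shown here are assumptions for illustration only; verify the metric details in your CloudWatch console and substitute your own application ID, job run ID, and SNS topic ARN.

```bash
# Sketch: alarm when the aggregate worker memory used by a job run exceeds a threshold (GB).
# Namespace and dimension names are assumed; verify them in the CloudWatch console first.
aws cloudwatch put-metric-alarm \
  --alarm-name emr-serverless-worker-memory-high \
  --namespace "AWS/EMRServerless" \
  --metric-name WorkerMemoryUsed \
  --dimensions Name=ApplicationId,Value=<application-id> Name=JobRunId,Value=<job-run-id> \
  --statistic Maximum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 1500 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:<region>:<account-id>:<your-sns-topic>
```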
Solution overview
To further enhance this observability experience, we have created a solution that gathers all of these metrics on a single CloudWatch dashboard for an EMR Serverless application. You need to launch one AWS CloudFormation template per EMR Serverless application. You can monitor all the jobs submitted to a single EMR Serverless application using the same CloudWatch dashboard. To learn more about this dashboard and deploy the solution into your own account, refer to the EMR Serverless CloudWatch Dashboard GitHub repository.
In the following sections, we walk you through how you can use this dashboard to perform the following actions:
- Optimize your resource utilization to save costs without impacting job performance
- Diagnose failures caused by common errors without the need for log diving, and resolve those errors optimally
Prerequisites
To run the sample jobs provided in this post, you need to create an EMR Serverless application with default settings using the AWS Management Console or AWS Command Line Interface (AWS CLI), and then launch the CloudFormation template from the GitHub repo with the EMR Serverless application ID provided as the input to the template.
You need to submit all the jobs in this post to the same EMR Serverless application. If you want to monitor a different application, you can deploy this template for your own EMR Serverless application ID.
Optimize resource utilization
When running Spark jobs, you typically start with the default configurations. It can be challenging to optimize your workload without any visibility into actual resource utilization. Some of the most common configurations that we've seen customers adjust are spark.driver.cores, spark.driver.memory, spark.executor.cores, and spark.executor.memory.
To illustrate how the newly added CloudWatch dashboard worker-level metrics can help you fine-tune your job configurations for better price-performance and enhanced resource utilization, let's run the following Spark job, which uses the NOAA Integrated Surface Database (ISD) dataset to run some transformations and aggregations.
Use the following command to run this job on EMR Serverless. Provide your Amazon Simple Storage Service (Amazon S3) bucket and the EMR Serverless application ID for which you launched the CloudFormation template. Make sure to use the same application ID to submit all the sample jobs in this post. Additionally, provide an AWS Identity and Access Management (IAM) runtime role.
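The following AWS CLI command is a sketch of how such a job can be submitted, assuming the job script has already been uploaded to your S3 bucket; the script name, bucket, role ARN, and application ID are placeholders to replace with your own values.

```bash
# Sketch: submit the sample Spark job to EMR Serverless with default Spark configurations.
# Replace the application ID, role ARN, script path, and bucket with your own values.
aws emr-serverless start-job-run \
  --application-id <application-id> \
  --execution-role-arn arn:aws:iam::<account-id>:role/<emr-serverless-runtime-role> \
  --name noaa-isd-default-config \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<your-bucket>/scripts/noaa-isd-transform.py",
      "entryPointArguments": ["s3://<your-bucket>/output/"]
    }
  }'
```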
Now let's check the executor vCPUs and memory on the CloudWatch dashboard.
This job was submitted with default EMR Serverless Spark configurations. From the Executor CPU Allocated metric in the preceding screenshot, the job was allocated 396 vCPUs in total (99 executors * 4 vCPUs per executor). However, the job only used a maximum of 110 vCPUs based on Executor CPU Used. This indicates oversubscription of vCPU resources. Similarly, the job was allocated 1,584 GB of memory in total based on Executor Memory Allocated. However, from the Executor Memory Used metric, we see that the job only used 176 GB of memory during the run, indicating memory oversubscription.
Now let's rerun this job with the following adjusted configurations.
| | Original Job (Default Configuration) | Rerun Job (Adjusted Configuration) |
| --- | --- | --- |
| spark.executor.memory | 14 GB | 3 GB |
| spark.executor.cores | 4 | 2 |
| spark.dynamicAllocation.maxExecutors | 99 | 30 |
| Total Resource Utilization | 6.521 vCPU-hours, 26.084 memoryGB-hours, 32.606 storageGB-hours | 1.739 vCPU-hours, 3.688 memoryGB-hours, 17.394 storageGB-hours |
| Billable Resource Utilization | 7.046 vCPU-hours, 28.182 memoryGB-hours, 0 storageGB-hours | 1.739 vCPU-hours, 3.688 memoryGB-hours, 0 storageGB-hours |
We use the following code:
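The rerun uses the same entry point but overrides the executor settings through sparkSubmitParameters. This is a sketch under the same assumptions as before (placeholder application ID, role ARN, and script path); the three Spark properties match the adjusted values in the preceding table.

```bash
# Sketch: rerun the job with smaller executors and a lower executor cap.
aws emr-serverless start-job-run \
  --application-id <application-id> \
  --execution-role-arn arn:aws:iam::<account-id>:role/<emr-serverless-runtime-role> \
  --name noaa-isd-adjusted-config \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<your-bucket>/scripts/noaa-isd-transform.py",
      "entryPointArguments": ["s3://<your-bucket>/output/"],
      "sparkSubmitParameters": "--conf spark.executor.memory=3g --conf spark.executor.cores=2 --conf spark.dynamicAllocation.maxExecutors=30"
    }
  }'
```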
Let's check the executor metrics from the CloudWatch dashboard again for this job run.
In the second job, we see lower allocation of both vCPUs (396 vs. 60) and memory (1,584 GB vs. 120 GB) as expected, resulting in better utilization of resources. The original job ran for 4 minutes, 41 seconds. The second job took 4 minutes, 54 seconds. This reconfiguration resulted in a 79% lower cost without affecting job performance.
You can use these metrics to further optimize your job by increasing or decreasing the number of workers or the allocated resources.
Diagnose and resolve job failures
Using the CloudWatch dashboard, you can diagnose job failures caused by CPU, memory, and storage issues such as out of memory or no space left on device. This allows you to identify and resolve common errors quickly without having to check the logs or navigate through Spark History Server. Additionally, because you can check the resource utilization from the dashboard, you can fine-tune the configurations by increasing the required resources only as much as needed instead of oversubscribing to resources, which further saves costs.
Driver errors
To illustrate this use case, let's run the following Spark job, which creates a large Spark data frame with a few million rows. Typically, this operation is done by the Spark driver. While submitting the job, we also configure spark.rpc.message.maxSize, because it's required for task serialization of data frames with a large number of columns.
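The following PySpark snippet is a minimal sketch of such a driver-heavy job, assuming the rows are first materialized on the driver so that building and distributing the data frame stresses driver memory; it is not the exact script used for this post, and the row and column counts are illustrative.

```python
# Sketch: build a wide data frame from rows materialized on the driver.
# Creating the rows locally stresses driver memory, and serializing wide tasks
# is why spark.rpc.message.maxSize is raised when submitting the job.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("driver-heavy-dataframe").getOrCreate()

num_rows = 3_000_000   # illustrative sizes; tune to your environment
num_cols = 100

# Rows are created in driver memory before being distributed to executors.
rows = [tuple(float(i + j) for j in range(num_cols)) for i in range(num_rows)]
columns = [f"col_{j}" for j in range(num_cols)]

df = spark.createDataFrame(rows, columns)
print(df.count())

spark.stop()
```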
After a few minutes, the job failed with the error message "Encountered errors when releasing containers," as seen in the Job details section.
When encountering non-descriptive error messages like this, you would normally need to investigate further by analyzing the driver and executor logs. But before any log diving, let's first check the CloudWatch dashboard, specifically the driver metrics, because releasing containers is generally performed by the driver.
We can see that Driver CPU Used and Driver Storage Used are well within their respective allocated values. However, upon checking Driver Memory Allocated and Driver Memory Used, we can see that the driver was using all 16 GB of memory allocated to it. By default, EMR Serverless drivers are assigned 16 GB of memory.
Let's rerun the job with more driver memory allocated. Let's set driver memory to 27 GB as the starting point, because spark.driver.memory + spark.driver.memoryOverhead should be less than 30 GB for the default worker type. spark.rpc.message.maxSize will remain unchanged.
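Here is a sketch of the rerun, using the same placeholder script and role as before; only the driver memory setting changes, and the spark.rpc.message.maxSize value shown (in MiB) is illustrative, so keep whatever value you used on the first attempt.

```bash
# Sketch: rerun the driver-heavy job with 27 GB of driver memory.
# The spark.rpc.message.maxSize value (MiB) is illustrative; keep the value from the first run.
aws emr-serverless start-job-run \
  --application-id <application-id> \
  --execution-role-arn arn:aws:iam::<account-id>:role/<emr-serverless-runtime-role> \
  --name driver-memory-27g \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<your-bucket>/scripts/driver-heavy-dataframe.py",
      "sparkSubmitParameters": "--conf spark.driver.memory=27g --conf spark.rpc.message.maxSize=512"
    }
  }'
```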
The job succeeded this time. Let's check the CloudWatch dashboard to observe driver memory utilization.
As we can see, the allocated memory is now 30 GB, but the actual driver memory utilization didn't exceed 21 GB during the job run. Therefore, we can further optimize costs here by reducing the value of spark.driver.memory. We reran the same job with spark.driver.memory set to 22 GB, and the job still succeeded with better driver memory utilization.
Executor errors
Using CloudWatch for observability is ideal for diagnosing driver-related issues because there is only one driver per job, so the driver resources used reflect the actual resource utilization of that single driver. In contrast, executor metrics are aggregated across all the workers. However, you can still use this dashboard to provide just enough resources to make your job succeed, thereby avoiding oversubscription of resources.
To illustrate, let's run the following Spark job, which simulates uniform disk over-utilization across all workers by processing very large NOAA datasets from multiple years. This job also transiently caches a very large data frame on disk.
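A minimal sketch of such a disk-heavy job follows, assuming the public NOAA Global Hourly (ISD) dataset layout on S3 (the bucket path and the STATION column are assumptions) and using DISK_ONLY persistence so the cached data frame lands on each worker's ephemeral storage.

```python
# Sketch: process several years of NOAA ISD data and cache a large data frame on disk,
# driving up ephemeral storage usage uniformly across executors.
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("executor-disk-heavy").getOrCreate()

# Assumed public dataset location and years; replace with the NOAA ISD paths you use.
years = ["2021", "2022", "2023"]
paths = [f"s3://noaa-global-hourly-pds/{year}/" for year in years]

df = spark.read.option("header", "true").csv(paths)

# Persist on disk only, so cached blocks are written to executor ephemeral storage.
df.persist(StorageLevel.DISK_ONLY)

# Trigger the cache and a wide aggregation so both cached and shuffled data hit disk.
print(df.count())
df.groupBy("STATION").count().write.mode("overwrite").parquet("s3://<your-bucket>/output/noaa-agg/")

spark.stop()
```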
After a few minutes, we can see that the job failed with a "No space left on device" error in the Job details section, which indicates that some of the workers ran out of disk space.
Checking the Running Executors metric from the dashboard, we can identify that there were 99 executor workers running. Each worker comes with 20 GB of storage by default.
Because this is a Spark job failure, let's check the Executor Storage Allocated and Executor Storage Used metrics from the dashboard (the driver won't run any tasks).
As we can see, the 99 executors used up a total of 1,940 GB of the total allocated executor storage of 2,126 GB. This includes both the data shuffled by the executors and the storage used for caching the data frame. We don't see the full 2,126 GB being utilized in this graph because a few of the 99 executors may not have been holding much data when the job failed (before those executors could start processing tasks and store their chunks of the data frame).
Let's rerun the same job but with an increased executor disk size using the parameter spark.emr-serverless.executor.disk. Let's try 40 GB of disk per executor as a starting point.
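The following is a sketch of the rerun with a larger per-executor disk, using the same placeholder application ID, role ARN, and script path as the earlier commands:

```bash
# Sketch: rerun the disk-heavy job with 40 GB of ephemeral storage per executor.
aws emr-serverless start-job-run \
  --application-id <application-id> \
  --execution-role-arn arn:aws:iam::<account-id>:role/<emr-serverless-runtime-role> \
  --name executor-disk-40g \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<your-bucket>/scripts/executor-disk-heavy.py",
      "sparkSubmitParameters": "--conf spark.emr-serverless.executor.disk=40g"
    }
  }'
```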
This time, the job ran successfully. Let's check the Executor Storage Allocated and Executor Storage Used metrics.
Executor Storage Allocated is now 4,251 GB because we've doubled the value of spark.emr-serverless.executor.disk. Although there is now twice as much aggregate executor storage, the job still used only a maximum of 1,940 GB out of 4,251 GB. This indicates that our executors were likely running out of disk space by only a few GBs. Therefore, we can try to set spark.emr-serverless.executor.disk to an even lower value, such as 25 GB or 30 GB instead of 40 GB, to save storage costs, as we did in the earlier scenario. In addition, you can monitor Executor Storage Read Bytes and Executor Storage Write Bytes to see if your job is I/O intensive. In that case, you can use the shuffle-optimized disks feature of EMR Serverless to further enhance your job's I/O performance.
The dashboard is also useful for capturing information about transient storage used while caching or persisting data frames, including spill-to-disk scenarios. The Storage tab of Spark History Server records any caching activities, as seen in the following screenshot. However, this data is lost from Spark History Server after the cache is evicted or the job finishes. Therefore, Executor Storage Used can be used to analyze a failed job run caused by transient storage issues.
In this particular example, the data was evenly distributed among the executors. However, if you have a data skew (for example, only 1–2 executors out of 99 process the most data, and as a result, your job runs out of disk space), the CloudWatch dashboard won't accurately capture this scenario because the storage data is aggregated across all the executors for a job. For diagnosing issues at the individual executor level, you need to track per-executor-level metrics. We explore more advanced examples of how per-worker-level metrics can help you identify, mitigate, and resolve hard-to-find issues through the EMR Serverless integration with Amazon Managed Service for Prometheus.
Conclusion
In this post, you learned how to effectively manage and optimize your EMR Serverless application using a single CloudWatch dashboard with enhanced EMR Serverless metrics. These metrics are available in all AWS Regions where EMR Serverless is available. For more details about this feature, refer to Job-level monitoring.
About the Authors
Kashif Khan is a Sr. Analytics Specialist Solutions Architect at AWS, specializing in big data services like Amazon EMR, AWS Lake Formation, AWS Glue, Amazon Athena, and Amazon DataZone. With over a decade of experience in the big data domain, he possesses extensive expertise in architecting scalable and robust solutions. His role involves providing architectural guidance and collaborating closely with customers to design tailored solutions using AWS analytics services to unlock the full potential of their data.
Veena Vasudevan is a Principal Partner Solutions Architect and Data & AI specialist at AWS. She helps customers and partners build highly optimized, scalable, and secure solutions; modernize their architectures; and migrate their big data, analytics, and AI/ML workloads to AWS.