Cassandra AWS CPU Guidelines

March 14, 2017


Cassandra CPU requirements in AWS Cloud

Cassandra is highly concurrent. Cassandra nodes can use as many CPU cores as are available, if configured correctly.

What are vCPUs and ECUs?

An Amazon EC2 vCPU is a hyperthread, often referred to as a virtual core. Think of it as a single hardware thread of execution: it can run one thread at a time (which of course could be swapped out).

An Amazon ECU (EC2 Compute Unit) is a legacy benchmark unit that AWS used to publish, pegged to the power of a circa-2007 1.0–1.2 GHz processor like those used in the earliest incarnations of EC2. 50 ECUs would be roughly 50 of those chips from a bygone era. Ignore ECUs.

EC2 note: newer generations of EC2 instances claim hyperthreads as real cores. If you take a look at /proc/cpuinfo on an i2.2xlarge, you will see 8 cores assigned to the system. Look a little closer and the "siblings" field indicates that half of those cores are hyperthreading cores, so in effect you only have 4 cores' worth of silicon in those VMs. (See Al Tobey's Cassandra tuning guide: a word on hyperthreading.)
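You can check this yourself on any Linux instance; a minimal sketch comparing logical vCPUs against physical cores (the cpuinfo fields may be absent on some non-x86 platforms, hence the fallbacks):

```shell
# Logical processors (vCPUs / hyperthreads) the VM exposes:
nproc
# Physical cores vs. hardware threads per socket; if siblings is
# double cpu cores, half of your "cores" are hyperthreads:
grep -m1 'cpu cores' /proc/cpuinfo || true
grep -m1 'siblings'  /proc/cpuinfo || true
```
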

Just keep in mind that a vCPU is a virtual core, aka a hyperthread or sibling core. This is the standard terminology for virtualization. Continue to ignore ECUs.

This document provides requirements and guidelines for picking the right EC2 instance type with the right number of vCPUs.

Cassandra write-heavy workloads can be CPU bound

Cassandra workloads that do a lot of writing (insert-heavy) can be CPU-bound. This effect is multiplied when using JBOD, where a single node is managing 4 or 8 volumes. Cassandra is efficient for writes, but this is largely because it defers work to compaction, which merge-sorts SSTables and is CPU intensive. Often the CPU becomes the limiting factor for writes.

Since writes are almost never IO-bound, the ideal concurrent_writes depends on the number of cores in the Cassandra node. A good rule of thumb is to set concurrent_writes in cassandra.yaml to 8 x vCPUs for EBS and 4 x vCPUs for EC2 instance storage.
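As a sketch, on a hypothetical m4.2xlarge (8 vCPUs) that rule of thumb works out to:

```yaml
# cassandra.yaml — m4.2xlarge (8 vCPUs) backed by EBS: 8 x vCPUs
concurrent_writes: 64
# same instance on EC2 instance storage would be 4 x vCPUs:
# concurrent_writes: 32
```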

Set concurrent_compactors to the number of vCPUs if using SSDs, and to the number of attached EBS volumes/disks for JBOD. The point is that the more CPU resources your Cassandra node has, the faster the compaction throughput. See this Cassandra tuning guide for more information, and this JIRA ticket. If you are having GC issues, you can limit concurrent_compactors to 4. The G1 collector should be able to handle the larger number of concurrent_compactors, but it is an area to watch.
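Putting those guidelines together for a hypothetical 8-vCPU node:

```yaml
# cassandra.yaml — hypothetical 8-vCPU node
# SSD / instance storage: one compactor per vCPU
concurrent_compactors: 8
# JBOD with 4 attached EBS volumes: one compactor per volume
# concurrent_compactors: 4
# if you are having GC issues, cap it:
# concurrent_compactors: 4
```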

Compaction strategy can influence vCPU usage

SizeTieredCompactionStrategy works with larger SSTables, so it has spikier CPU usage. LeveledCompactionStrategy uses a more even level of CPU.
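Compaction strategy is set per table in CQL; a sketch (the keyspace and table names here are hypothetical):

```sql
-- Spiky CPU during large compactions, good default for writes:
ALTER TABLE my_keyspace.events
  WITH compaction = {'class': 'SizeTieredCompactionStrategy'};

-- Smoother, steadier CPU usage, at the cost of more total compaction work:
ALTER TABLE my_keyspace.events
  WITH compaction = {'class': 'LeveledCompactionStrategy'};
```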

We hope you found this information on CPU requirements for Cassandra running in EC2/AWS informative. We provide Cassandra consulting and Kafka consulting to get you set up fast in AWS with CloudFormation and CloudWatch. Check out our Cassandra training and Kafka training as well. Cloudurable specializes in AWS DevOps automation for Cassandra and Kafka.

In general, use a minimum of 4 to 8 vCPUs for Cassandra nodes

You need at least 4 cores, but prefer 8, for a production machine. Compaction, compression, key lookups based on bloom filters, and SSL (if enabled) all need CPU resources. The m4.xlarge falls a bit behind here, as it only has 4 vCPUs. The m4.2xlarge has 8 vCPUs, which should handle most production loads nicely. The i2.xlarge (for high random reads) and the d2.xlarge (for high writes and long sequential reads) are also a little light on CPU power. Consider the i3.2xlarge and d2.2xlarge for production workloads, as they have 8 vCPUs.

CPU usage for the G1 garbage collector and CMS

Both the G1 garbage collector and the CMS garbage collector benefit from having more threads. When working with large Java heap sizes, G1 and CMS can benefit from parallel processing, which requires more CPUs.
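As a sketch, on Cassandra 3.x you would enable G1 and size its thread counts in jvm.options (the exact values depend on your JVM version and vCPU count; the numbers below assume a hypothetical 8-vCPU node):

```
## jvm.options — hypothetical 8-vCPU node
-XX:+UseG1GC
# stop-the-world GC worker threads, typically one per vCPU:
-XX:ParallelGCThreads=8
# concurrent marking threads, usually a fraction of the above:
-XX:ConcGCThreads=2
```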

CMS has a habit of turning memory into Swiss cheese, eventually requiring a full, stop-the-world garbage collection. G1 does not have this memory-fragmentation problem, which is part of why CMS is deprecated as of Java 9.

Multi-DC / multi-region deployments need more CPU resources

If you are using multiple regions, i.e., a multi-DC deployment, then you will want to increase max_hints_delivery_threads, as cross-DC handoffs are slower. Also keep in mind that cluster/storage communication adds CPU overhead, which might be a wash if the DC-to-DC link has a lot of latency.

Cassandra allows one outbound hint thread per target Cassandra node. The maximum inbound hint streaming per node is still capped by hinted_handoff_throttle_in_kb, so you can safely increase max_hints_delivery_threads without worrying about overwhelming a single node. See Bandwidth Required for Cassandra Hinted Handoff for more details about the math. How many threads do you need? How eventually consistent do you want to be between data centers? (And how long will these threads be waiting on IO? That depends on the latency of the network and your throttle rate.) Increase this to 16 or more for two DCs, and to 32 or more for multiple DCs. For single-DC deployments, set it to half the number of nodes in the system or less.
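A minimal sketch for a hypothetical two-DC deployment, following the guidance above:

```yaml
# cassandra.yaml — hypothetical two-DC deployment
max_hints_delivery_threads: 16       # 32 or more for several DCs
# per-node inbound cap that keeps extra delivery threads safe
# (1024 is the Cassandra 3.x default):
hinted_handoff_throttle_in_kb: 1024
```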

With Cassandra, it is important to consider not just the happy case, but the unhappy case: a DC goes down for a few hours and comes back up right during peak usage. It is good to have the extra CPU. The more nodes, or the more latency between DCs, the more vCPUs you might want to have.

Cassandra workloads with large datasets

For Cassandra workloads that can’t fit into main memory, Cassandra’s bottleneck will be reads that need to fetch data from disk (EBS volume or local storage). Set concurrent_reads to (16 * number_of_drives), so with a 4-volume JBOD setup you could have 64 read threads.
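That formula, sketched out for a 4-volume JBOD node:

```yaml
# cassandra.yaml — 4-volume JBOD node: 16 * number_of_drives
concurrent_reads: 64
# single-volume node would be:
# concurrent_reads: 16
```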

Doing a lot of streaming writes between nodes? Increase memtable_flush_writers

If you are streaming a lot of data from many nodes, you need to increase the number of flush writers (memtable_flush_writers) so the incoming streams do not all back up on the memtables. If you do not have enough writers to deal with a larger-than-normal amount of data hitting them, your streams can fail. The recommendation is to set memtable_flush_writers equal to the number of vCPUs on the EC2 Cassandra node instance; more vCPUs allow more write throughput. Read Scale it to Billions — What They Don’t Tell you in the Cassandra README for more details.

Recall that if your data directories are backed by instance storage SSDs, you can increase this, keeping memtable_flush_writers * data_file_directories <= number of vCPUs. If you are using instance storage HDDs or EBS SSDs, set memtable_flush_writers to the number of vCPUs. Do not leave this set to 1.
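Worked out for a hypothetical 8-vCPU node, those constraints look like:

```yaml
# cassandra.yaml — hypothetical 8-vCPU node
# instance storage SSD with 2 data directories:
# memtable_flush_writers * data_file_directories <= vCPUs, so e.g.
memtable_flush_writers: 4
# instance storage HDD or EBS SSD: one writer per vCPU
# memtable_flush_writers: 8
```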

Horizontal scale is not always easy with Cassandra

When you are using Cassandra for super high-speed writes, or using it with very large datasets, adding nodes is not always enough: you may have to scale your Cassandra nodes vertically, adding more vCPUs and memory.


When horizontally scaling, Cassandra does a lot of streaming. You could make your EC2 instances larger (gradually), add new nodes, let Cassandra stream data to the new nodes, and then gradually size the instances back down. You would employ EBS snapshots: take nodes offline, resize them, bring them back online, let them recover, and repeat.

Once you get into larger EC2 instance sizes to use with Cassandra, then NUMA concerns come into play.

References

More info about Cassandra and AWS

Read more about Cassandra AWS with this slide deck.

Amazon has a guide that covers Cassandra on AWS that is a must-read. There is also this Amazon Cassandra guide on High Scalability that is worth reading.

About Cloudurable™

Cloudurable™: streamlined DevOps/DBA for Cassandra running on AWS. Cloudurable™ provides AMIs, CloudWatch monitoring, CloudFormation templates, and monitoring tools to support Cassandra running in production in EC2. We also teach advanced Cassandra courses that cover how to develop, support, and deploy Cassandra to production in AWS EC2, for developers and DevOps/DBAs. We also provide Cassandra consulting and Cassandra training.

Follow Cloudurable™ at our LinkedIn page, Facebook page, Google plus or Twitter.

More info about Cloudurable

Please take some time to read the Advantage of using Cloudurable™.


Authors

Written by R. Hightower and JP Azar.

Feedback


We hope you enjoyed this article. Please provide feedback.

About Cloudurable

Cloudurable provides Cassandra training, Cassandra consulting, Cassandra support, and helps set up Cassandra clusters in AWS. Cloudurable also provides Kafka training, Kafka consulting, Kafka support, and helps set up Kafka clusters in AWS.

Check out our new GoLang course. We provide onsite, instructor-led Go training.

