Cassandra 5.0 Cluster Tutorial 2025: Ansible Automation for DevOps Tasks

January 9, 2025


What’s New in 2025

Key Updates and Changes

  • Cassandra 5.0: Storage Attached Indexes (SAI), Vector Search, Unified Compaction
  • Ansible 2.19: Event-driven automation, enhanced cloud integrations
  • VirtualBox Compatibility: Use 6.1.x with Vagrant 2.4.1 for stability
  • Security First: Ansible Vault and external secret managers now standard
  • Infrastructure as Code: Git-based workflows with Ansible Collections

Deprecated Features

  • Cassandra 3.x is end-of-life
  • Legacy Ansible inventory formats
  • Manual SSH key management (use automation)
  • Static inventories for cloud environments

Cassandra Tutorial: Setting up Ansible for our Cassandra Database Cluster for DevOps/DBA tasks

Ansible is a key DevOps/DBA tool for managing backups and rolling upgrades to the Cassandra cluster in AWS/EC2. Ansible uses ssh, so you do not have to install an agent to use it. In 2025, Ansible remains the preferred automation tool with improved event-driven capabilities.

This article series focuses on DevOps/DBA tasks with the Cassandra Database. The use of Ansible for DevOps/DBA goes beyond the Cassandra Database. This article helps any DevOps/DBA or Developer that needs to manage groups of instances, boxes, or hosts. These can be on-prem bare-metal, dev boxes, or in the Cloud. You don’t need to be setting up Cassandra to benefit from this article.

This was later split into two parts.

The most up-to-date versions will be in the above two links.

Cassandra Tutorials: Tutorial Series on DevOps/DBA Cassandra Database

The first article in this series was about setting up a Cassandra cluster with Vagrant (it also appeared on DZone with some additional content as DZone Setting up a Cassandra Cluster with Vagrant). The second article in this series was about setting up SSL for a Cassandra cluster using Vagrant (which also appeared with more content as DZone Setting up a Cassandra Cluster with SSL). You don’t need those articles to follow along, but they provide a lot of context. You can find the source for the first and second articles at our Cloudurable Cassandra Image for Docker, AWS, and Vagrant. In later articles, we will use Ansible to create more complicated playbooks, like doing a rolling Cassandra upgrade, and we will cover using Ansible/ssh with AWS EC2.

Source code for Vagrant and Ansible to create the Cassandra Cluster

We continue to evolve the cassandra-image GitHub project. So that the code matches the listings in this article, we created a branch that captures the code as it was when this article was written (more or less): Article 3 Ansible Cassandra Vagrant.

Where do you go if you have a problem or get stuck?

We set up a Google Group for this project and set of articles. If you just can’t get something to work or you are getting an error message, please report it here. Between the mailing list and the GitHub issues, we can support you with quite a few questions and issues. You can also find new articles in this series by following Cloudurable™ at our LinkedIn page, Facebook page, Google Plus or Twitter.

Let’s get to it. Let’s start by creating a key for our DevOps/DBA test Cassandra cluster.

Create key for test cluster to do Cassandra Database DevOps/DBA tasks with Ansible

To use Ansible for DevOps/DBA, we need to set up SSH keys. Ansible uses ssh instead of running an agent on each server the way Chef and Puppet do.

The tool ssh-keygen manages authentication keys for ssh (secure shell). It creates RSA, ECDSA, or ED25519 keys for the SSH protocol (DSA is deprecated). You can specify the key type with the -t option. In 2025, ED25519 keys are recommended for better security, but RSA remains widely supported.

setup key script bin/setupkeys-cassandra-security.sh

CLUSTER_NAME=test
...
# Use ED25519 for better security in 2025
ssh-keygen -t ed25519 -C "cassandra-cluster-2025" -N "" \
 -f "$PWD/resources/server/certs/${CLUSTER_NAME}_ed25519"

# Fallback RSA key for compatibility
ssh-keygen -t rsa -b 4096 -C "cassandra-cluster-2025" -N "" \
 -f "$PWD/resources/server/certs/${CLUSTER_NAME}_rsa"

chmod 400 "$PWD/resources/server/certs/"*
cp "$PWD/resources/server/certs/"* ~/.ssh
...

Let’s break that down.

We use ssh-keygen to create private keys that we will use to log into our boxes. In 2025, we create both ED25519 (preferred) and RSA (fallback) keys.

In this article those boxes are Vagrant boxes (VirtualBox), but in the next article, we will use the same key to manage EC2 instances.

Check out our Cassandra training and Kafka training. We specialize in AWS DevOps Automation for Cassandra and Kafka.

Use ssh-keygen to create private key for ssh

# ED25519 - Recommended for 2025
ssh-keygen -t ed25519 -C "cassandra-cluster-2025" -N "" \
 -f "$PWD/resources/server/certs/${CLUSTER_NAME}_ed25519"

# RSA with 4096 bits for compatibility
ssh-keygen -t rsa -b 4096 -C "cassandra-cluster-2025" -N "" \
 -f "$PWD/resources/server/certs/${CLUSTER_NAME}_rsa"

Then we restrict access to the key files; otherwise, ansible, ssh, and scp (secure copy) will not let us use them.

Change the access of the key


chmod 400 "$PWD/resources/server/certs/"*

The above chmod 400 changes the key files so only the owner can read them. This makes sense: a private key should be readable only by its owner, and that is exactly what mode 400 enforces.

Copy keys to the area where they will be picked up by provisioning

cp "$PWD/resources/server/certs/"* ~/.ssh

The above just puts the files where our provisioners (Packer and Vagrant) can pick them up and deploy them with the image.

Locally we are using Vagrant to launch a cluster to do some tests on our laptop. Note that VirtualBox 7.1.x requires manual workarounds with Vagrant 2.4.1, so VirtualBox 6.1.x is recommended for stability.

We also use Packer and the aws command line tools to create EC2 AMIs (and Docker images), but we don’t cover AWS in this article (that is the subject of the very next article, which is essentially part 2 of this one).

Create a bastion server to do ansible DevOps/DBA tasks

We plan to use a bastion server in a public subnet to send commands to our Cassandra Database nodes. These nodes will be in a private subnet in EC2. For local testing we set up a bastion server, which is well explained in this guide to Vagrant and Ansible.

We used Learning Ansible with Vagrant (Part 2/4) as a guide for some of the setup in this article. It provides solid Ansible and Vagrant knowledge for DevOps/DBA. Their mgmt node corresponds to what we call a bastion server. We use CentOS 7, not Ubuntu. We also made updates for newer Ansible versions.

We added a bastion server to our Vagrant config as follows:

Vagrantfile to set up the bastion for our Cassandra Cluster


  # Define Bastion Node
  config.vm.define "bastion" do |node|
            node.vm.network "private_network", ip: "192.168.50.20"
            node.vm.provider "virtualbox" do |vb|
                   vb.memory = "512"
                   vb.cpus = 2
            end


            node.vm.provision "shell", inline: <<-SHELL
                yum install -y epel-release
                yum update -y
                yum install -y ansible python3-pip

                # Install ansible collections for 2025
                ansible-galaxy collection install community.general
                ansible-galaxy collection install amazon.aws

                mkdir /home/vagrant/resources
                cp -r /vagrant/resources/* /home/vagrant/resources/

                mkdir -p ~/resources
                cp -r /vagrant/resources/* ~/resources/

                mkdir  -p  /home/vagrant/.ssh/
                cp /vagrant/resources/server/certs/*  /home/vagrant/.ssh/

                sudo  /vagrant/scripts/002-hosts.sh

                ssh-keyscan -t ed25519,rsa node0 node1 node2  >> /home/vagrant/.ssh/known_hosts


                mkdir ~/playbooks
                cp -r /vagrant/playbooks/* ~/playbooks/
                sudo cp /vagrant/resources/home/inventory.ini /etc/ansible/hosts
                
                # Create ansible.cfg for 2025 best practices
                cat > /home/vagrant/ansible.cfg <<EOF
[defaults]
host_key_checking = False
inventory = /etc/ansible/hosts
remote_user = vagrant
interpreter_python = auto_silent

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
pipelining = True
EOF
                
                chown -R vagrant:vagrant /home/vagrant
            SHELL

The bastion server, which could sit on a public subnet in an AWS VPC, uses ssh-keyscan to add the nodes that we set up in the hosts file to /home/vagrant/.ssh/known_hosts. In 2025, we scan for both ED25519 and RSA key types.

Running ssh-keyscan

ssh-keyscan -t ed25519,rsa node0 node1 node2  >> /home/vagrant/.ssh/known_hosts

This avoids having to verify each node interactively. It prevents the prompt The authenticity of host ... can't be established. ... Are you sure you want to continue connecting (yes/no)? when running the ansible command line tools.

Modify the Vagrant provision script

Since we use provisioning files to create different types of images (Docker, EC2 AMI, Vagrant/VirtualBox), we use a provisioning script specific to Vagrant.

In this Vagrant provision script, we call another provision script to set up a hosts file.

000-vagrant-provision.sh

mkdir  -p  /home/vagrant/.ssh/
cp /vagrant/resources/server/certs/*  /home/vagrant/.ssh/
...
scripts/002-hosts.sh
echo RUNNING TUNE OS
...

Setting up sshd on our Cassandra Database nodes in our DevOps Cassandra Cluster

The provision script 002-hosts.sh configures /etc/ssh/sshd_config to allow public key authentication. Then it restarts sshd, the daemon that handles ssh communication. (The other provisioning scripts it invokes were covered in the first two articles.)

Let’s look at the 002-hosts.sh provision script. You can see some remnants from the last article where we set up cqlsh, and then it gets down to business setting up sshd (the secure shell daemon).

scripts/002-hosts.sh - sets up sshd and hosts file

#!/bin/bash
set -e



## Copy the cqlshrc file that controls cqlsh connections to ~/.cassandra/cqlshrc.
mkdir ~/.cassandra
cp ~/resources/home/.cassandra/cqlshrc ~/.cassandra/cqlshrc

## Allow pub key login to ssh - Updated for 2025
sed -i -e 's/#PubkeyAuthentication no/PubkeyAuthentication yes/g' /etc/ssh/sshd_config
sed -i -e 's/#PubkeyAuthentication yes/PubkeyAuthentication yes/g' /etc/ssh/sshd_config

# Enable ED25519 keys explicitly
echo "PubkeyAcceptedKeyTypes=+ssh-ed25519,ssh-rsa" >> /etc/ssh/sshd_config

## System control restart sshd daemon to take sshd_config into effect.
systemctl restart sshd

# Create host file so it is easier to ssh from box to box
cat >> /etc/hosts <<EOL

192.168.50.20  bastion

192.168.50.4  node0
192.168.50.5  node1
192.168.50.6  node2
192.168.50.7  node3
192.168.50.8  node4
192.168.50.9  node5
EOL

This is specific to our Vagrant setup at this point. To simplify access to the servers running Cassandra Database nodes, 002-hosts.sh appends entries to the /etc/hosts file on the bastion server. In 2025, we enable both ED25519 and RSA key types for broader compatibility.

With public key authentication enabled in sshd, our keys in place, and our hosts configured (and our inventory.ini file shipped), we can start using ansible from our bastion server.
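
Before reaching for ansible, it is worth confirming that plain key-based ssh works from the bastion to a node. This quick check assumes the cluster key was copied to ~/.ssh and installed on the nodes by the provisioning described later in this article.

Quick ssh check from bastion to node0

$ ssh -i ~/.ssh/test_ed25519 vagrant@node0 hostname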

This reminds me, we did not talk about the ansible inventory.ini file.

Ansible config on bastion for Cassandra Database Cluster

Ansible has an ansible.cfg file and an inventory.ini file. When you run ansible, it checks for ansible.cfg in the current working directory, then in your home directory, and then falls back to the master config file (/etc/ansible/ansible.cfg). We created an inventory.ini file that lives under ~/github/cassandra-image/resources/home, which gets mapped to /vagrant/resources/home on the virtual machines (node0, bastion, node1, and node2). A provision script moves the inventory.ini file to its proper location (sudo cp /vagrant/resources/home/inventory.ini /etc/ansible/hosts).
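
A quick way to confirm which configuration file Ansible actually picked up is to run ansible --version; its output includes a config file = line showing the active ansible.cfg.

Check which ansible.cfg Ansible is using

$ ansible --version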

The inventory.ini contains servers that you want to manage with Ansible. We have a bastion group for our bastion server. We also have the nodes group made up of node0, node1, and node2.

Let’s see what the inventory.ini file actually looks like.

inventory.ini that gets copied to Ansible master list on Bastion

[bastion]
bastion ansible_python_interpreter=/usr/bin/python3

[nodes]
node0 ansible_python_interpreter=/usr/bin/python3
node1 ansible_python_interpreter=/usr/bin/python3
node2 ansible_python_interpreter=/usr/bin/python3

[nodes:vars]
ansible_user=vagrant
ansible_ssh_private_key_file=~/.ssh/test_ed25519

In 2025, we specify Python 3 interpreter and use ED25519 keys by default.

Once we provision our cluster, we can log into bastion and start executing ansible commands.

Installing cert key for test DevOps/DBA Cassandra Cluster on all nodes using an ansible playbook

To make this happen, we had to tell the other servers about our certification keys.

We did this with an ansible playbook as follows:

Ansible playbook getting invoked from Vagrant on each new Cassandra Database node


Vagrant.configure("2") do |config|


  config.vm.box = "centos/7"


  # Define Cassandra Nodes
  (0..numCassandraNodes-1).each do |i|

        port_number = i + 4
        ip_address = "192.168.50.#{port_number}"
        seed_addresses = "192.168.50.4,192.168.50.5,192.168.50.6"
        config.vm.define "node#{i}" do |node|
            node.vm.network "private_network", ip: ip_address
            node.vm.provider "virtualbox" do |vb|
                   vb.memory = "4096"
                   vb.cpus = 4
            end
            ...

            node.vm.provision "ansible" do |ansible|
                  ansible.playbook = "playbooks/ssh-addkey.yml"
                  ansible.extra_vars = {
                    ansible_python_interpreter: "/usr/bin/python3"
                  }
            end
        end
  end

Notice the line node.vm.provision "ansible" do |ansible| and ansible.playbook = "playbooks/ssh-addkey.yml".

If you are new to Vagrant and the above just is not making sense, please watch the Vagrant Crash Course. It is by the same folks who created the Ansible series.

Ansible playbooks describe the configuration and orchestration you want to apply. You can perform tons of operations that are important for DevOps (like yum installing software, Cassandra-specific tasks, etc.).

Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. –Ansible Playbook documentation.

Here is the ansible playbook that adds the ED25519 and RSA public keys to the Cassandra nodes.

Ansible playbook ssh-addkey.yml to add test_ed25519.pub to all Cassandra Database node servers

---
- hosts: all
  become: true
  gather_facts: no
  remote_user: vagrant

  tasks:

  - name: install ED25519 ssh key
    authorized_key: 
      user: vagrant
      key: "{{ lookup('file', '../resources/server/certs/test_ed25519.pub') }}"
      state: present

  - name: install RSA ssh key (fallback)
    authorized_key: 
      user: vagrant
      key: "{{ lookup('file', '../resources/server/certs/test_rsa.pub') }}"
      state: present

The trick here is that Vagrant supports running Ansible playbooks as well. In 2025, we install both ED25519 and RSA keys for maximum compatibility.

The Vagrant Ansible provisioner allows you to provision the guest using Ansible playbooks by executing ansible-playbook from the Vagrant host. –[Vagrant Ansible documentation](https://www.vagrantup.com/docs/provisioning/ansible.html)
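
Vagrant runs the playbook for us during provisioning, but you can also run it by hand from the project root if you ever need to re-apply the keys. This is a sketch, not part of the project scripts; the inventory path and the Vagrant-generated private key location are assumptions based on a default Vagrant/VirtualBox setup.

Re-running ssh-addkey.yml by hand (illustrative)

$ ansible-playbook playbooks/ssh-addkey.yml \
   -i inventory.ini \
   --user vagrant \
   --private-key .vagrant/machines/node0/virtualbox/private_key \
   --limit node0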

For users who did not read any of the first articles on setting up the Cassandra Cluster

If you have not done so already, navigate to the project root dir (which is ~/github/cassandra-image on my dev box) and download the binaries. The source code is at the GitHub Cassandra Image project.

Running setup scripts

## cd ~; mkdir github; cd github; git clone https://github.com/cloudurable/cassandra-image
$ cd ~/github/cassandra-image
$ pwd
~/github/cassandra-image
## Setup keys
$ bin/setupkeys-cassandra-security.sh
## Download binaries (Cassandra 5.0)
$ bin/prepare_binaries.sh
## Bring Vagrant cluster up
$ vagrant up

Even if you read the first article, note that bin/prepare_binaries.sh is something we added after the first two articles. It downloads the binaries needed for provisioning (including Cassandra 5.0), does a checksum of the files, and then installs them as part of the provisioning process.

Running ansible commands from bastion

Let’s log into bastion and run ansible commands against the cassandra nodes.

Working with ansible from bastion and using ssh-agent

$ vagrant ssh bastion

So we don’t have to keep logging in and passing our cert key, let’s start up an ssh-agent and add our cert keys to the agent.

The ssh-agent holds private keys used for public key authentication (RSA, DSA, ECDSA, Ed25519). This means you don’t have to keep passing the keys around. The ssh-agent usually starts at the beginning of a login session. Other programs (scp, ssh, ansible) start as clients to the ssh-agent utility.

Mastering ssh is key for DevOps and needed for ansible.

First set up ssh-agent and add keys to it with ssh-add.

Start ssh-agent and add keys

$ ssh-agent bash
$ ssh-add ~/.ssh/test_ed25519
$ ssh-add ~/.ssh/test_rsa

With the agent running and our keys added, we can use ansible without passing it the private key.
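
You can verify that the agent is holding both identities with ssh-add -l, which lists the fingerprints of the loaded keys.

List the keys held by ssh-agent

$ ssh-add -l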

Let’s verify connectivity by pinging some of these machines. Let’s ping the node0 machine. Then let’s ping all of the nodes.

Let’s use the ansible ping module to test the node0 server.

Ansible Ping the Cassandra Database node

$ ansible node0 -m ping

Output

node0 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

To learn more about DevOps with ansible see this video on Ansible introduction. It covers a lot of the basics of ansible.

Now let’s ping all of the nodes.

Ansible Ping all Cassandra Database Cluster nodes

$ ansible nodes  -m ping

Output

node0 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
node2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
node1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Looks like bastion can run ansible against all of the servers.

Setting up my macOS machine to run Ansible against Cassandra Database Cluster nodes

The script ~/github/cassandra-image/bin/setupkeys-cassandra-security.sh copies the test cluster key for ssh (secure shell) over to ~/.ssh/ (cp "$PWD/resources/server/certs/"* ~/.ssh). Run it from the project root folder, which is ~/github/cassandra-image on my box.

Move to where you checked out the project.

cd ~/github/cassandra-image

In this folder are an ansible.cfg file and an inventory.ini file for local dev. Before you use these, first modify your /etc/hosts file to add entries for the bastion, node0, node1, and node2 servers.

Add bastion, node0, etc. to /etc/hosts

$ cat /etc/hosts

### Used for ansible/ vagrant
192.168.50.20  bastion
192.168.50.4  node0
192.168.50.5  node1
192.168.50.6  node2
192.168.50.7  node3
192.168.50.8  node4
192.168.50.9  node5

We can use ssh-keyscan just like we did before to add these hosts to our known_hosts file.

Add keys to known_hosts to avoid prompts

$ ssh-keyscan -t ed25519,rsa node0 node1 node2  >> ~/.ssh/known_hosts

Then just like before we can start up an ssh-agent and add our keys.

Start ssh-agent and add keys

$ ssh-agent bash
$ ssh-add ~/.ssh/test_ed25519
$ ssh-add ~/.ssh/test_rsa

The ansible.cfg and inventory.ini files differ a bit from the ones on our bastion server. Here we need to add the user name to the inventory.

Notice the ansible.cfg file and inventory.ini file in the project dir

$ cd ~/github/cassandra-image

$ cat ansible.cfg
[defaults]
inventory = inventory.ini
host_key_checking = False
interpreter_python = auto_silent

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
pipelining = True

$ cat inventory.ini
[nodes]
node0 ansible_user=vagrant ansible_python_interpreter=/usr/bin/python3
node1 ansible_user=vagrant ansible_python_interpreter=/usr/bin/python3
node2 ansible_user=vagrant ansible_python_interpreter=/usr/bin/python3

Ansible will use these.

From the project directory, you should be able to ping node0 and all of the nodes just like before.

Ping node0 with ansible.

Ansible Ping Cassandra Database node

$ ansible node0 -m ping

Output

node0 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Ping all of the Cassandra nodes with ansible.

Ansible Ping All Cassandra Database Cluster nodes

$ ansible nodes  -m ping

Output

node0 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
node2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
node1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

In the next article, we cover how to set up ~/.ssh/config so you don’t have to remember to use ssh-agent.

Using ansible to run nodetool on Cassandra Cluster nodes

You may recall from the first article that we would log into the servers (vagrant ssh node0) and check that they could see the other nodes with nodetool describecluster. With ansible, we can run this command against all three servers at once (from bastion or from our dev laptop).

Let’s use ansible to run describecluster against all of the nodes.

Ansible running nodetool describecluster against all Cassandra Cluster nodes

$ ansible nodes -a "/opt/cassandra/bin/nodetool describecluster"

This command allows us to check the status of every node quickly.
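
The same ad-hoc pattern works for any nodetool subcommand. For example, a quick ring status check across the whole cluster (using the same nodetool path as above):

Ansible running nodetool status against all Cassandra Cluster nodes

$ ansible nodes -a "/opt/cassandra/bin/nodetool status"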

Let’s say we want to update a schema or do a rolling restart of our Cassandra cluster nodes. This is a common task. Before the update, we might want to decommission the node and back things up. To do this sort of automation, we could create an Ansible playbook.

Ansible playbooks are more powerful than ad-hoc tasks, and they shine when managing a cluster of Cassandra servers.

Playbooks allow configuration management and multi-machine deployment. They manage complex tasks like rolling upgrades, schema updates, or weekly backups.

Playbooks are declarative configurations, and they orchestrate multi-step processes into a single repeatable task. This automation removes manual processes and allows for immutable infrastructure.
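
To make the rolling-restart idea concrete, here is a minimal sketch of such a playbook. It assumes Cassandra runs as a systemd service named cassandra (an assumption; adjust the unit name to your install), uses the same nodetool path as the rest of this tutorial, and relies on serial: 1 so Ansible finishes one node before touching the next.

Rolling restart sketch (illustrative)

---
- hosts: nodes
  become: true
  serial: 1
  remote_user: vagrant

  tasks:

  - name: Drain the node so it flushes memtables and stops accepting writes
    command: /opt/cassandra/bin/nodetool drain

  - name: Restart the Cassandra service (unit name is an assumption)
    systemd:
      name: cassandra
      state: restarted

  - name: Wait until the node reports Up/Normal before moving on
    command: /opt/cassandra/bin/nodetool status
    register: ring
    until: "'UN' in ring.stdout"
    retries: 30
    delay: 10

A production version would also handle backups, decommissioning, and failure checks; that is beyond the scope of this article.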

Our describe-cluster playbook for Cassandra Database Cluster nodes

Creating a complex playbook is beyond the scope of this article. But let’s create a simple playbook and execute it. This playbook will run the nodetool describecluster on each node.

Here is our playbook that runs Cassandra nodetool describecluster on each Cassandra node in our cluster.

playbooks/describe-cluster.yml - simple ansible playbook that runs Cassandra nodetool describecluster

---
- hosts: nodes
  gather_facts: no
  remote_user: vagrant

  tasks:

  - name: Run NodeTool Describe Cluster command against each Cassandra Cluster node
    command: /opt/cassandra/bin/nodetool describecluster
    register: result
    
  - name: Display cluster information
    debug:
      var: result.stdout_lines

To run this, we use ansible-playbook as follows.

Running describe-cluster playbook

$ ansible-playbook playbooks/describe-cluster.yml --verbose

Advanced Ansible Features for 2025

In 2025, consider these advanced Ansible features for Cassandra management:

Event-Driven Automation

---
- name: Cassandra Health Check with Event Response
  hosts: nodes
  tasks:
    - name: Check Cassandra status
      command: /opt/cassandra/bin/nodetool status
      register: status_check
      
    - name: Trigger alert on node down
      uri:
        url: "https://alerts.example.com/cassandra"
        method: POST
        body_format: json
        body:
          node: "{{ inventory_hostname }}"
          status: "DOWN"
      when: "'DN' in status_check.stdout"

Using Ansible Collections

The sketch below assumes the community.cassandra and amazon.aws collections are installed (as on our bastion); the snapshot tag and archive path are illustrative.

---
- hosts: nodes
  collections:
    - community.cassandra
    - amazon.aws

  tasks:
    # community.cassandra wraps nodetool; cassandra_backup toggles incremental backups.
    - name: Enable incremental backups
      community.cassandra.cassandra_backup:
        state: enabled

    - name: Snapshot the keyspace before shipping it off-node
      command: /opt/cassandra/bin/nodetool snapshot -t weekly my_keyspace

    - name: Upload the snapshot archive to S3 (archive path is illustrative)
      amazon.aws.s3_object:
        bucket: cassandra-backups-2025
        object: "{{ inventory_hostname }}/my_keyspace-weekly.tar.gz"
        src: /tmp/my_keyspace-weekly.tar.gz
        mode: put

Between this article and the last, we modified our Vagrantfile quite a bit. It now uses a loop to create the Cassandra nodes, and it uses Ansible provisioning.

Here is our new Vagrantfile with updates:

Complete code listing of Vagrantfile that sets up our DevOps/DBA Cassandra Database Cluster

# -*- mode: ruby -*-
# vi: set ft=ruby :

numCassandraNodes = 3

Vagrant.configure("2") do |config|


  config.vm.box = "centos/7"


  # Define Cassandra Nodes
  (0..numCassandraNodes-1).each do |i|

        port_number = i + 4
        ip_address = "192.168.50.#{port_number}"
        seed_addresses = "192.168.50.4,192.168.50.5,192.168.50.6"
        config.vm.define "node#{i}" do |node|
            node.vm.network "private_network", ip: ip_address
            node.vm.provider "virtualbox" do |vb|
                   vb.memory = "4096"
                   vb.cpus = 4
            end


            node.vm.provision "shell", inline: <<-SHELL

                sudo /vagrant/scripts/000-vagrant-provision.sh



                sudo /opt/cloudurable/bin/cassandra-cloud -cluster-name test \
                -client-address     #{ip_address} \
                -cluster-address    #{ip_address} \
                -cluster-seeds      #{seed_addresses}

            SHELL

            node.vm.provision "ansible" do |ansible|
                  ansible.playbook = "playbooks/ssh-addkey.yml"
                  ansible.extra_vars = {
                    ansible_python_interpreter: "/usr/bin/python3"
                  }
            end
        end
  end


  # Define Bastion Node
  config.vm.define "bastion" do |node|
            node.vm.network "private_network", ip: "192.168.50.20"
            node.vm.provider "virtualbox" do |vb|
                   vb.memory = "512"
                   vb.cpus = 2
            end


            node.vm.provision "shell", inline: <<-SHELL
                yum install -y epel-release
                yum update -y
                yum install -y ansible python3-pip

                # Install ansible collections
                ansible-galaxy collection install community.general
                ansible-galaxy collection install amazon.aws

                mkdir /home/vagrant/resources
                cp -r /vagrant/resources/* /home/vagrant/resources/

                mkdir -p ~/resources
                cp -r /vagrant/resources/* ~/resources/

                mkdir  -p  /home/vagrant/.ssh/
                cp /vagrant/resources/server/certs/*  /home/vagrant/.ssh/

                sudo  /vagrant/scripts/002-hosts.sh

                ssh-keyscan -t ed25519,rsa node0 node1 node2  >> /home/vagrant/.ssh/known_hosts


                mkdir ~/playbooks
                cp -r /vagrant/playbooks/* ~/playbooks/
                sudo cp /vagrant/resources/home/inventory.ini /etc/ansible/hosts
                
                # Create modern ansible.cfg
                cat > /home/vagrant/ansible.cfg <<EOF
[defaults]
host_key_checking = False
inventory = /etc/ansible/hosts
remote_user = vagrant
interpreter_python = auto_silent

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
pipelining = True
EOF
                
                chown -R vagrant:vagrant /home/vagrant
            SHELL


  end



  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
  # such as FTP and Heroku are also available. See the documentation at
  # https://docs.vagrantup.com/v2/push/atlas.html for more information.
  config.push.define "atlas" do |push|
     push.app = "cloudurable/cassandra"
  end


end

Conclusion

We set up Ansible for our Cassandra Database Cluster to automate common DevOps/DBA tasks. We created ssh keys (both ED25519 and RSA). Then we set up our instances with these keys so we could use ssh, scp, and ansible. We set up a bastion server with Vagrant. We used an ansible playbook (ssh-addkey.yml) from Vagrant to install our test cluster key on each server. We ran ansible ping against a single server. We ran ansible ping against many servers (nodes). We set up our local dev machine with ansible.cfg and inventory.ini so we could run ansible commands directly against node0 and nodes. We ran nodetool describecluster against all of the nodes from our dev machine. Finally, we created a simple playbook that can run nodetool describecluster.

In 2025, Ansible remains a powerful tool for managing Cassandra clusters. It offers improved features like event-driven automation, better cloud integrations, and stronger security practices. In later articles, we will use Ansible to create more complex playbooks like backing up Cassandra nodes to S3 using Cassandra 5.0’s new features.

Next up

The next article picks up where this one left off. It covers Cloud DevOps: using Packer, Ansible/SSH, and the AWS command line tools to create and manage EC2 Cassandra instances in AWS. It helps developers and DevOps/DBA staff who want to create AWS AMI images and manage EC2 instances with Ansible.

The next article covers the following:

  • Creating images (EC2 AMIs) with Packer
  • Using Packer from Ansible to provision an image (AWS AMI)
  • Installing systemd services that depend on other services and will auto-restart on failure
  • AWS command line tools to launch an EC2 instance
  • Setting up ansible to manage our EC2 instance (ansible uses ssh)
  • Setting up a ssh-agent and adding ssh identities (ssh-add)
  • Setting up ssh using ~/.ssh/config so we don’t have to pass credentials around
  • Using ansible dynamic inventory with EC2
  • AWS command line tools to manage DNS entries with Route 53

If you are doing DevOps with AWS, Ansible dynamic inventory management with EC2 is excellent. Mastering ssh config is also a must. You should master the AWS command line tools to automate common tasks. This next article explores all of those topics.

About Cloudurable™

Cloudurable™ streamlines DevOps/DBA for Cassandra running on AWS. Cloudurable™ provides AMIs, CloudWatch Monitoring, CloudFormation templates and monitoring tools. These support Cassandra in production running in EC2. We also teach advanced Cassandra courses that show how to develop, support and deploy Cassandra to production in AWS EC2 for Developers and DevOps/DBA. We also provide Cassandra consulting and Cassandra training.

Follow Cloudurable™ at our LinkedIn page, Facebook page, Google plus or Twitter.

More info about Cloudurable

Please take some time to read the Advantage of using Cloudurable™.

Authors

Written by R. Hightower and JP Azar.

Feedback


We hope you enjoyed this article. Please provide [feedback](https://cloudurable.com/contact/index.html).
About Cloudurable

Cloudurable provides [Cassandra training](https://cloudurable.com/cassandra-course/index.html "Onsite, Instructor-Led, Cassandra Training"), [Cassandra consulting](https://cloudurable.com/kafka-aws-consulting/index.html "Cassandra professional services"), [Cassandra support](https://cloudurable.com/subscription_support/index.html) and helps [setting up Cassandra clusters in AWS](https://cloudurable.com/services/index.html). Cloudurable also provides [Kafka training](https://cloudurable.com/kafka-training/index.html "Onsite, Instructor-Led, Kafka Training"), [Kafka consulting](https://cloudurable.com/kafka-aws-consulting/index.html), [Kafka support](https://cloudurable.com/subscription_support/index.html) and helps [setting up Kafka clusters in AWS](https://cloudurable.com/services/index.html).

Check out our new GoLang course. We provide onsite Go Lang training which is instructor led.

