Kubernetes StatefulSet with ZooKeeper

Kubernetes StatefulSet with ZooKeeper as an example

Background

We were having a hard time deploying Kafka to Kubernetes. It had worked fine during development and integration testing. We started with Minikube for local development.

If you are not interested in the background, feel free to skip ahead to the meat of the matter.

We created a Spring Boot microservice that uses Kafka. We ran Kafka in Minikube with Helm 2. By the way, Minikube is a small Kubernetes distribution that runs easily on macOS, Linux, and Windows. Minikube is great for local application development and supports most Kubernetes features. It works well for local testing, and we also used it for integration testing.

Continue reading

kubectl cheatsheet (OSX/Mac)

These k8s/helm/OSX install notes are reproduced from Rick Hightower's profile with permission of Rick Hightower.

Shell completion is a must while you are learning Kubernetes.

kubectl: shell completion

Shell Autocompletion set up guide

In 2019, Apple announced that macOS Catalina would use Zsh (Z Shell) as the default shell, replacing Bash. Zsh is an extended Bourne shell that incorporates features of Bash, ksh, and tcsh.

Add the following to your .zshrc:

autoload -Uz compinit; compinit; source <(kubectl completion zsh)

If you are still using Bash, follow the instructions in the Shell Autocompletion set up guide and use the Bash tab on that page.

Continue reading

Set up Kubernetes on Mac: Minikube, Helm, etc.

Set up docker, k8s and helm on a Mac Book Pro (OSX)

These k8s/helm/OSX install notes are reproduced with permission of Rick Hightower.

Install docker

 brew install docker

Also install Docker Desktop for Mac.

Minikube

Use a version of k8s that the stable version of Helm supports.

Install minikube

brew cask install minikube

Install hyperkit

brew install hyperkit

Run Minikube with a Kubernetes version that is compatible with the latest stable Helm release, using the HyperKit driver:

minikube start --kubernetes-version v1.15.4 --vm-driver=hyperkit --cpus=4 --disk-size='100000mb' --memory='6000mb'
  • minikube start: start Minikube
  • Use a k8s version compatible with Helm 2: --kubernetes-version v1.15.4
  • Use the HyperKit driver: --vm-driver=hyperkit
  • Use 4 virtual cores: --cpus=4 (a MacBook Pro with 8 cores has 16 virtual cores)
  • Allocate 100 GB of disk space: --disk-size='100000mb' (you might need more)
  • Use 6 GB of memory: --memory='6000mb'

Helm Install

Install Helm, or just follow this guide: Helm install on a Mac.

Continue reading

Kafka Consumer: Advanced Consumers

Kafka Tutorial 14: Creating Advanced Kafka Consumers in Java - Part 1

In this tutorial, you are going to create advanced Kafka Consumers.

Before you start

The prerequisites to this tutorial are

Welcome to the first article on Advanced Kafka Consumers.

In this article, we are going to set up an advanced Kafka Consumer.
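
As a rough preview, here is a minimal sketch of a basic Java Kafka consumer that the advanced material builds on. The broker address, consumer group id, and topic name below are placeholder assumptions, not values from the tutorial.

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class SimpleConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder broker address; point this at your Kafka cluster.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Placeholder consumer group id.
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Subscribe to a placeholder topic and poll a few times.
                consumer.subscribe(Collections.singletonList("example-topic"));
                for (int i = 0; i < 10; i++) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("key=%s value=%s partition=%d offset=%d%n",
                                record.key(), record.value(), record.partition(), record.offset());
                    }
                }
            }
        }
    }

The advanced tutorial layers additional concerns on top of this basic subscribe-and-poll loop.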

Continue reading

Kafka Tutorial

Kafka Tutorial

This comprehensive Kafka tutorial covers Kafka architecture and design. It includes example Java Kafka producers and consumers, and it also covers Avro and the Schema Registry.

Kafka Training - Onsite, Instructor-led

Training for DevOps, Architects and Developers

This Kafka course teaches the basics of the Apache Kafka distributed streaming platform. Apache Kafka is one of the most powerful and widely used reliable streaming platforms. Kafka is fault tolerant and highly scalable, and it is used for log aggregation, stream processing, event sourcing, and commit logs. Kafka is used by LinkedIn, Yahoo, Twitter, Square, Uber, Box, PayPal, Etsy, and more to enable stream processing and online messaging, to facilitate in-memory computing by providing a distributed commit log, to collect data for big data, and much more.

Continue reading

Kafka Tutorial: Creating Advanced Kafka Producers in Java

Kafka Tutorial 13: Creating Advanced Kafka Producers in Java

In this tutorial, you are going to create advanced Kafka Producers.

Before you start

The prerequisites to this tutorial are

This tutorial picks up right where Kafka Tutorial Part 11: Writing a Kafka Producer example in Java and Kafka Tutorial Part 12: Writing a Kafka Consumer example in Java left off. In the last two tutorials, we created simple Java examples of a Kafka producer and a Kafka consumer.
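
For reference, here is a minimal sketch of the kind of simple producer those tutorials cover; the broker address, topic name, key, and value are placeholder assumptions rather than values from the tutorials.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.serialization.StringSerializer;

    import java.util.Properties;

    public class SimpleProducerSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder broker address; point this at your Kafka cluster.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Send one record to a placeholder topic and wait for the broker to acknowledge it.
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("example-topic", "example-key", "example-value");
                RecordMetadata metadata = producer.send(record).get();
                System.out.printf("Sent to partition %d at offset %d%n",
                        metadata.partition(), metadata.offset());
            }
        }
    }

The advanced producer tutorial builds on this simple send-and-wait pattern.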

Continue reading

Kafka Consulting

Kafka Consulting

Learn how to employ best practices for your teams' Kafka deployments. Cloudurable has a range of consulting services and training to help you get the most out of Kafka, from architecture to setting up health checks.

If you are new to Apache Kafka, Cloudurable has mentoring, consulting, and training to help you get the most out of the Kafka streaming data platform. Our professional services team can support your team in designing a real-time streaming platform using Kafka. We can work shoulder to shoulder with your DevOps, Ops, and development teams. Learn about best practices and trade-offs, and avoid costly pitfalls to ensure a successful Kafka deployment.

Continue reading

Kafka Tutorial: Creating Advanced Kafka Consumers in Java

Kafka Tutorial 14: Creating Advanced Kafka Consumers in Java

In this tutorial, you are going to create advanced Kafka Consumers.

UNDER CONSTRUCTION.

Before you start

The prerequisites to this tutorial are

This tutorial picks up right where Kafka Tutorial Part 11: Writing a Kafka Producer example in Java left off. In the last tutorial, we created advanced Java producers; now we will do the same with consumers.

Continue reading

Kafka Architecture: Log Compaction

Kafka Architecture: Log Compaction

This post picks up from our series on Kafka architecture, which includes Kafka topics architecture, Kafka producer architecture, Kafka consumer architecture, and Kafka ecosystem architecture.

This article is heavily inspired by the log compaction section of the Kafka design documentation. You can think of it as the CliffsNotes of Kafka's log compaction design.

Kafka can delete older records based on the age or size of a log. Kafka also supports log compaction, which compacts records by key: during compaction, Kafka keeps the latest version of a record for each key and deletes the older versions.
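
Log compaction is enabled per topic via the cleanup.policy=compact setting. As a minimal sketch, here is how a compacted topic could be created with the Kafka AdminClient; the broker address, topic name, partition count, and replication factor are placeholder assumptions.

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.config.TopicConfig;

    import java.util.Collections;
    import java.util.Properties;

    public class CreateCompactedTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder broker address; point this at your Kafka cluster.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // cleanup.policy=compact keeps the latest record per key instead of
                // deleting records purely by age or size.
                NewTopic compacted = new NewTopic("example-compacted-topic", 1, (short) 1)
                        .configs(Collections.singletonMap(
                                TopicConfig.CLEANUP_POLICY_CONFIG,
                                TopicConfig.CLEANUP_POLICY_COMPACT));
                admin.createTopics(Collections.singletonList(compacted)).all().get();
                System.out.println("Created compacted topic: example-compacted-topic");
            }
        }
    }

With this policy, the log cleaner periodically rewrites log segments so that only the latest value for each key survives.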

Continue reading

Kafka Architecture: Low Level

If you are not sure what Kafka is, see What is Kafka?.

Kafka Architecture: Low-Level Design

This post picks up from our series on Kafka architecture, which includes Kafka topics architecture, Kafka producer architecture, Kafka consumer architecture, and Kafka ecosystem architecture.

This article is heavily inspired by the Kafka design documentation. You can think of it as the CliffsNotes version.


Kafka Design Motivation

LinkedIn engineering built Kafka to support real-time analytics. Kafka was designed to feed analytics systems that did real-time processing of streams. LinkedIn developed Kafka as a unified platform for the real-time handling of streaming data feeds. The goal behind Kafka was to build a high-throughput streaming data platform that supports high-volume event streams like log aggregation, user activity, and more.

Continue reading


Apache Spark Training
Kafka Tutorial
Akka Consulting
Cassandra Training
AWS Cassandra Database Support
Kafka Support Pricing
Cassandra Database Support Pricing
Non-stop Cassandra
Watchdog
Advantages of using Cloudurable™
Cassandra Consulting
Cloudurable™ | Guide to AWS Cassandra Deploy
Cloudurable™ | AWS Cassandra Guidelines and Notes
Free guide to deploying Cassandra on AWS
Kafka Training
Kafka Consulting
DynamoDB Training
DynamoDB Consulting
Kinesis Training
Kinesis Consulting
Kafka Tutorial PDF
Kubernetes Security Training
Redis Consulting
Redis Training
ElasticSearch / ELK Consulting
ElasticSearch Training
InfluxDB/TICK Training
TICK Consulting