October 28, 2019
Set up docker, k8s and helm on a MacBook Pro (OSX)
These k8s/helm/OSX install notes are reproduced from Rick Hightower, with his permission.
Install docker
brew install docker
Then install Docker Desktop for Mac (the brew formula installs only the docker CLI; Docker Desktop provides the daemon).
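To confirm docker is working before moving on, a quick sanity check like the following should do (it assumes Docker Desktop has been started at least once so the daemon is running):
$ docker --version              # client version from the brew install
$ docker info                   # fails if the Docker Desktop daemon is not running
$ docker run --rm hello-world   # pulls and runs a tiny test image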
Minikube
Use the version of k8s that the stable version of helm can use.
Install minikube
brew cask install minikube
Install hyperkit
brew install hyperkit
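To verify that both tools installed cleanly, something like this should work:
$ minikube version               # prints the installed minikube version
$ brew list --versions hyperkit  # confirms hyperkit was installed via brew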
Run a minikube version that is compatible with the latest stable helm release, on hyperkit:
minikube start --kubernetes-version v1.15.4 --vm-driver=hyperkit --cpus=4 --disk-size='100000mb' --memory='6000mb'
- minikube start: start minikube
- Use a k8s version that is compatible with Helm 2
--kubernetes-version v1.15.4
- Use the hyperkit driver
--vm-driver=hyperkit
- Use 4 cores (this MacBook Pro has 8 cores / 16 virtual cores)
--cpus=4
- Allocate 100 GB (100000 MB) of disk space (you might need more)
--disk-size='100000mb'
- Use 6 GB of memory
--memory='6000mb'
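Once minikube start finishes, it is worth checking that kubectl is pointed at the new cluster and the node is Ready; a quick check might look like this:
$ minikube status                 # host, kubelet and apiserver should all be Running
$ kubectl config current-context  # should print 'minikube'
$ kubectl get nodes               # the single minikube node should be Ready
$ kubectl version --short         # client and server versions (server should be v1.15.4)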
Helm Install
Install helm by following the official helm install docs, or just follow this guide to install helm on a Mac.
Use brew to install helm on the Mac:
brew install kubernetes-helm
Note: for production use cases, or even a shared public cloud, you must follow the guidelines for securing helm; a minimal hardening sketch follows the init command below.
Install helm's server component (Tiller) into your local dev cluster:
helm init --history-max 200
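The plain helm init above is fine for a throwaway local cluster only. A minimal hardening sketch (not what these notes use; you would run this instead of the plain init above) is to run Tiller under a dedicated service account with RBAC:
$ kubectl -n kube-system create serviceaccount tiller        # dedicated service account for Tiller
$ kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller                      # bind it (cluster-admin is still broad)
$ helm init --service-account tiller --history-max 200       # init Tiller with that account
For shared or production clusters you would go further (TLS between helm and Tiller, scoped roles instead of cluster-admin), per the official securing-helm guidelines.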
Install the minikube addons
minikube addons enable ingress
✅ ingress was successfully enabled
minikube addons enable efk
✅ efk was successfully enabled
minikube addons enable logviewer
✅ logviewer was successfully enabled
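To see which addons are now enabled (and what else is available), you can list them:
$ minikube addons list   # enabled addons show as 'enabled'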
Note: before you install any services with helm, ensure the Tiller image has downloaded and the tiller-deploy pod is Running.
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-s5mt9 1/1 Running 0 5m9s
coredns-5c98db65d4-svhm4 1/1 Running 0 5m9s
elasticsearch-logging-g28fp 0/1 Init:0/1 0 5m8s
etcd-minikube 1/1 Running 0 4m17s
fluentd-es-km4fw 0/1 ContainerCreating 0 5m8s
kibana-logging-n482d 0/1 ContainerCreating 0 5m8s
kube-addon-manager-minikube 1/1 Running 0 4m13s
kube-apiserver-minikube 1/1 Running 0 4m12s
kube-controller-manager-minikube 1/1 Running 0 4m2s
kube-proxy-7qwjc 1/1 Running 0 5m9s
kube-scheduler-minikube 1/1 Running 0 3m55s
logviewer-8664c4bdcd-n9bks 0/1 ContainerCreating 0 5m7s
nginx-ingress-controller-778fcbd96-57g4k 0/1 ContainerCreating 0 5m7s
storage-provisioner 1/1 Running 0 5m8s
tiller-deploy-7544fc765f-rfcw6 0/1 ContainerCreating 0 101s
The pods have to be in the Running state. Notice that fluentd and tiller are not running yet.
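Rather than re-running kubectl get pods by hand, you can watch the pods or wait for the Tiller rollout to finish, for example:
$ kubectl get pods -n kube-system -w                                # watch pod status changes (Ctrl-C to stop)
$ kubectl -n kube-system rollout status deployment/tiller-deploy    # blocks until Tiller is available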
To look at logs inside of minikube
$ minikube ssh
$ cd /var/log
$ pwd
/var/log
$ sudo tail -f **/*.log
Then look at the logs in the *.log files.
Once Tiller is up you can run this:
$ kubectl logs -n kube-system --follow tiller-deploy-7544fc765f-rfcw6
[main] 2019/10/23 14:30:13 Starting Tiller v2.15.0 (tls=false)
[main] 2019/10/23 14:30:13 GRPC listening on :44134
[main] 2019/10/23 14:30:13 Probes listening on :44135
[main] 2019/10/23 14:30:13 Storage driver is ConfigMap
[main] 2019/10/23 14:30:13 Max history per release is 200
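At this point a quick helm version check should show both the client and the Tiller server responding:
$ helm version   # with Helm 2 this prints both Client and Server (Tiller) versions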
Install prometheus
$ helm install stable/prometheus
Install Jenkins
$ helm install stable/jenkins
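Without a name, Helm 2 generates a random release name (like the oldfashioned-whippet and wrinkled-marsupial names in the output below). If you prefer predictable release names, you can pass --name, for example:
$ helm install --name prometheus stable/prometheus   # fixed release name instead of a generated one
$ helm install --name jenkins stable/jenkins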
Verify installs
System pods
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-s5mt9 1/1 Running 0 5h41m
coredns-5c98db65d4-svhm4 1/1 Running 0 5h41m
elasticsearch-logging-g28fp 1/1 Running 0 5h41m ## EFK Minikube addon
etcd-minikube 1/1 Running 0 5h40m
fluentd-es-km4fw 1/1 Running 0 5h41m ## EFK Minikube addon
kibana-logging-n482d 1/1 Running 0 5h41m ## EFK Minikube addon
kube-addon-manager-minikube 1/1 Running 0 5h40m
kube-apiserver-minikube 1/1 Running 0 5h40m
kube-controller-manager-minikube 1/1 Running 0 5h40m
kube-proxy-7qwjc 1/1 Running 0 5h41m
kube-scheduler-minikube 1/1 Running 0 5h40m
logviewer-8664c4bdcd-n9bks 1/1 Running 0 5h41m ## logviewer Minikube addon
nginx-ingress-controller-778fcbd96-57g4k 1/1 Running 0 5h41m ## EFK Nginx Ingress Controller addon
storage-provisioner 1/1 Running 0 5h41m
tiller-deploy-7544fc765f-rfcw6 1/1 Running 0 5h38m ## HELM
Other pods
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
oldfashioned-whippet-prometheus-alertmanager-584cb4f5fd-7bbgh 2/2 Running 0 17m ## Helm Prometheus
oldfashioned-whippet-prometheus-kube-state-metrics-d7655fdtkpkr 1/1 Running 0 17m ## Helm Prometheus
oldfashioned-whippet-prometheus-node-exporter-xnw5n 1/1 Running 0 17m ## Helm Prometheus
oldfashioned-whippet-prometheus-pushgateway-6f7494c5c-sqzz7 1/1 Running 0 17m ## Helm Prometheus
oldfashioned-whippet-prometheus-server-664ff75f66-xtzjn 2/2 Running 0 17m ## Helm Prometheus
wrinkled-marsupial-jenkins-744884567-pqgx7 1/1 Running 0 12m ## Helm Jenkins
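You can also list the Helm releases themselves to see their status and chart versions:
$ helm ls   # shows release name, revision, status, chart and namespace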
Run minikube
Run the minikube dashboard and see the deployments in the UI.
$ minikube dashboard
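Minikube can also print or open the URLs of services running in the cluster; for example, to reach Kibana from the EFK addon (the kibana-logging service name is assumed here from the pod names above):
$ minikube service list                            # all services and their URLs
$ minikube service kibana-logging -n kube-system   # opens Kibana in the browser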
Notes
Prometheus notes (yours will vary).
$ helm install stable/prometheus
NAME: oldfashioned-whippet
LAST DEPLOYED: Wed Oct 23 15:30:16 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
oldfashioned-whippet-prometheus-alertmanager 1 1s
oldfashioned-whippet-prometheus-server 3 1s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
oldfashioned-whippet-prometheus-alertmanager Bound pvc-46dc0345-9c09-413c-8476-256eb7a3f8e5 2Gi RWO standard 1s Filesystem
oldfashioned-whippet-prometheus-server Bound pvc-5cc0d588-a7fc-44d5-93ec-9578fafd9d00 8Gi RWO standard 1s Filesystem
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
oldfashioned-whippet-prometheus-alertmanager-584cb4f5fd-7bbgh 0/2 Pending 0 0s
oldfashioned-whippet-prometheus-kube-state-metrics-d7655fdtkpkr 0/1 ContainerCreating 0 0s
oldfashioned-whippet-prometheus-node-exporter-xnw5n 0/1 ContainerCreating 0 0s
oldfashioned-whippet-prometheus-pushgateway-6f7494c5c-sqzz7 0/1 ContainerCreating 0 0s
oldfashioned-whippet-prometheus-server-664ff75f66-xtzjn 0/2 Pending 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
oldfashioned-whippet-prometheus-alertmanager ClusterIP 10.101.234.84 <none> 80/TCP 1s
oldfashioned-whippet-prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 1s
oldfashioned-whippet-prometheus-node-exporter ClusterIP None <none> 9100/TCP 1s
oldfashioned-whippet-prometheus-pushgateway ClusterIP 10.105.193.19 <none> 9091/TCP 1s
oldfashioned-whippet-prometheus-server ClusterIP 10.110.136.5 <none> 80/TCP 1s
==> v1/ServiceAccount
NAME SECRETS AGE
oldfashioned-whippet-prometheus-alertmanager 1 1s
oldfashioned-whippet-prometheus-kube-state-metrics 1 1s
oldfashioned-whippet-prometheus-node-exporter 1 1s
oldfashioned-whippet-prometheus-pushgateway 1 1s
oldfashioned-whippet-prometheus-server 1 1s
==> v1beta1/ClusterRole
NAME AGE
oldfashioned-whippet-prometheus-alertmanager 1s
oldfashioned-whippet-prometheus-kube-state-metrics 1s
oldfashioned-whippet-prometheus-pushgateway 1s
oldfashioned-whippet-prometheus-server 1s
==> v1beta1/ClusterRoleBinding
NAME AGE
oldfashioned-whippet-prometheus-alertmanager 1s
oldfashioned-whippet-prometheus-kube-state-metrics 1s
oldfashioned-whippet-prometheus-pushgateway 1s
oldfashioned-whippet-prometheus-server 1s
==> v1beta1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
oldfashioned-whippet-prometheus-node-exporter 1 1 0 1 0 <none> 1s
==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
oldfashioned-whippet-prometheus-alertmanager 0/1 1 0 0s
oldfashioned-whippet-prometheus-kube-state-metrics 0/1 1 0 1s
oldfashioned-whippet-prometheus-pushgateway 0/1 1 0 0s
oldfashioned-whippet-prometheus-server 0/1 1 0 1s
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
oldfashioned-whippet-prometheus-server.default.svc.cluster.local
Get the Prometheus server URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090
The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
oldfashioned-whippet-prometheus-alertmanager.default.svc.cluster.local
Get the Alertmanager URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9093
#################################################################################
###### WARNING: Pod Security Policy has been moved to a global property. #####
###### use .Values.podSecurityPolicy.enabled with pod-based #####
###### annotations #####
###### (e.g. .Values.nodeExporter.podSecurityPolicy.annotations) #####
#################################################################################
The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
oldfashioned-whippet-prometheus-pushgateway.default.svc.cluster.local
Get the PushGateway URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9091
For more information on running Prometheus, visit:
https://prometheus.io/
Jenkins notes (yours will vary)
$ helm install stable/jenkins
NAME: wrinkled-marsupial
LAST DEPLOYED: Wed Oct 23 15:35:12 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
wrinkled-marsupial-jenkins 5 0s
wrinkled-marsupial-jenkins-tests 1 0s
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
wrinkled-marsupial-jenkins 0/1 1 0 0s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
wrinkled-marsupial-jenkins Bound pvc-8158743d-25e6-4910-8afb-255eaf41160f 8Gi RWO standard 0s Filesystem
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
wrinkled-marsupial-jenkins-744884567-pqgx7 0/1 Init:0/1 0 0s
==> v1/Role
NAME AGE
wrinkled-marsupial-jenkins-schedule-agents 0s
==> v1/RoleBinding
NAME AGE
wrinkled-marsupial-jenkins-schedule-agents 0s
==> v1/Secret
NAME TYPE DATA AGE
wrinkled-marsupial-jenkins Opaque 2 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wrinkled-marsupial-jenkins LoadBalancer 10.98.189.23 <pending> 8080:31821/TCP 0s
wrinkled-marsupial-jenkins-agent ClusterIP 10.99.253.191 <none> 50000/TCP 0s
==> v1/ServiceAccount
NAME SECRETS AGE
wrinkled-marsupial-jenkins 1 0s
NOTES:
1. Get your 'admin' user password by running:
printf $(kubectl get secret --namespace default wrinkled-marsupial-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc --namespace default -w wrinkled-marsupial-jenkins'
export SERVICE_IP=$(kubectl get svc --namespace default wrinkled-marsupial-jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
echo http://$SERVICE_IP:8080/login
3. Login with the password from step 1 and the username: admin
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
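When you are done experimenting, the releases and the cluster can be torn down; with Helm 2 something like this works (the release names below are the generated ones from above, yours will differ):
$ helm delete --purge oldfashioned-whippet   # removes the Prometheus release and its history
$ helm delete --purge wrinkled-marsupial     # removes the Jenkins release and its history
$ minikube stop                              # stops the VM but keeps its state
$ minikube delete                            # deletes the VM entirely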
This wraps up some notes on how to install K8s on OSX with helm.
Related content
- What is Kafka?
- [Kafka Architecture](https://cloudurable.com/blog/kafka-architecture/index.html)
- Kafka Topic Architecture
- Kafka Consumer Architecture
- Kafka Producer Architecture
- Kafka Architecture and low level design
- Kafka and Schema Registry
- Kafka and Avro
- Kafka Ecosystem
- Kafka vs. JMS
- Kafka versus Kinesis
- Kafka Tutorial: Using Kafka from the command line
- Kafka Tutorial: Kafka Broker Failover and Consumer Failover
- Kafka Tutorial
- Kafka Tutorial: Writing a Kafka Producer example in Java
- Kafka Tutorial: Writing a Kafka Consumer example in Java
- Kafka Architecture: Log Compaction
- Kafka Architecture: Low-Level PDF Slides
About Cloudurable
We hope you enjoyed this article. Please provide feedback. Cloudurable provides Kafka training, Kafka consulting, Kafka support and helps setting up Kafka clusters in AWS.
Check out our new GoLang course. We provide onsite Go Lang training which is instructor-led.