Kubernetes The Hard Way
Steven's Big Big Journey through this GitHub Repo by Kelsey Hightower.
- Proxmox Templates with Cloud-init
- Provisioning Resources with Terraform
- Jumpbox Setup
- Compute Resources
- Configuring the Certificate Authority
- Kubernetes Configuration Files
- Generating the Data Encryption Config and Key
- Bootstrapping the etcd Cluster
- Bootstrapping the Kubernetes Control Plane
- Bootstrapping the Kubernetes Worker Nodes
- Configuring kubectl for Remote Access
- Provisioning Pod Network Routes
- Smoke Test
Proxmox Templates with Cloud-init
Intro
Templates give the ability to clone a pre-configured virtual machine, saving time and opening the door to automation.
For these notes, the current Proxmox version is 8.2.4 and the template will be based on the current Ubuntu LTS, 24.04 Noble Numbat.
Creating Bare Bones VM
Create a normal virtual machine and note the VM ID for later.
Don't attach any installation media; the Ubuntu Cloud image comes pre-configured, and we will import it later from the command line.
Default selections will work fine. Optionally check the QEMU agent box.
The same idea as with the OS tab: we don't want any disks attached, as we will grab an Ubuntu Cloud image and import it later.
Leave the next two tabs as default; when cloning this template, we will be able to configure them as needed.
Select the appropriate network bridge. Optionally disable the firewall.
Final configuration for the VM.
Configuring Cloud-init
Under the newly created VM, select Cloud-init.
Add a Cloud-init drive; the default settings will work.
Set a username and password for the default user. This will be the user configured when the VM initializes.
Optionally add an SSH key.
Optionally select DHCP for IPv4, since it's easier to manage the leases on a router.
Final Cloud-init configuration.
Attaching the Image
You can download the Ubuntu Cloud image through the GUI or the terminal. Here I will use wget to download it.
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
If the file was downloaded through the GUI, change to the directory shown below to find it.
#cd /var/lib/vz/template/iso #default location
cd /mnt/pve/Peanut-ISO/template/iso
To get console output from the VM, we need to create a serial console and attach it to the VM.
#qm set <id> --serial0 socket --vga serial0
qm set 888 --serial0 socket --vga serial0
Change the .img file extension to .qcow2.
#mv <file name>.img <file name>.qcow2
mv noble-server-cloudimg-amd64.img noble-server-cloudimg-amd64.qcow2
Resize the image to a desired size for the VM.
#qemu-img resize <image> <size>
qemu-img resize noble-server-cloudimg-amd64.qcow2 24G
Import the disk to the VM; we will attach it through the GUI.
#qm importdisk <id> <cloud image> <vm disk storage>
qm importdisk 888 noble-server-cloudimg-amd64.qcow2 local-zfs
Going back to the GUI, we see that there is now an unused disk.
Edit the unused disk. If VM storage is on an SSD, check Discard and SSD emulation. Add the disk to the VM.
Move to the Options section.
Enable the image as a boot option by checking the box, then move it up so that the VM will boot from the image.
Don't start the VM. Convert it to a template first. Top right under "More" -> "Convert to template"
After converting to a template, the option to start the VM disappears.
Testing the Template
To test whether the template works: top right, under "More" -> "Clone".
Set the Mode to Full Clone. Linked Clones take up less space but require the template to continue to exist.
After starting the VM, there should be a console output. The VM will check for a Cloud-init drive and configure itself with the options set earlier.
SSH should work with the configured key once the VM gets a DHCP lease. If the QEMU agent box was ticked earlier, install the QEMU Guest Agent inside the VM, as shown below.
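The same test can also be done from the Proxmox shell. A minimal sketch, where 888 is the template ID from earlier and 201 is a placeholder ID for the clone:
# on the Proxmox node
qm clone 888 201 --name template-test --full
qm start 201
# inside the booted clone (console or SSH)
apt update && apt install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent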
Provisioning Resources with Terraform
Intro
While Terraform is usually used to provision resources in the cloud, I wanted to play around with Infrastructure as Code (IaC) without needing to create an account with a cloud provider. Terraform does not have an official Proxmox provider, but there is a community-created one by Telmate. These notes will cover provisioning the prerequisites required by the Kubernetes The Hard Way repository.
Getting the Terraform Binaries
Terraform provides installation options in their docs. Installation will vary across operating systems, but the Terraform commands themselves are the same. For these notes, Terraform will be installed on a Windows 11 host.
Creating a Proxmox Terraform User
It is possible to just use the current root user for Terraform, but creating another user and giving it the minimum required roles is more secure.
Creating the user can be done either through the GUI or the command line. Since the user will require a lot of privileges, it will be easier to run this in the command line.
To create the role:
pveum role add TerraformUser -privs "Datastore.AllocateSpace Datastore.AllocateTemplate Datastore.Audit Pool.Allocate Sys.Audit Sys.Console Sys.Modify VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.Monitor VM.PowerMgmt SDN.Use"
To create the user:
pveum user add terraform-user@pve --password <password>
To assign the role to the user:
pveum aclmod / -user terraform-user@pve -role TerraformUser
Optional: Create an API Token for the user:
Example:
Token ID: terraform-user@pve!TF
Secret: 12345678-1234-1a2a-cde3-456fgh78ij90
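If you go the token route, the token can also be created from the command line. This is a sketch reusing the example Token ID above; the secret is printed once on creation, so save it.
# --privsep 0 makes the token inherit the user's permissions;
# with privilege separation enabled, the ACL would also have to be granted to the token
pveum user token add terraform-user@pve TF --privsep 0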
Configuring the Proxmox Provider
Providers allow Terraform to interact with cloud providers, SaaS providers, and other APIs. - Terraform Docs
To add the Proxmox provider, create a provider.tf file. An example template is shown below.
terraform {
required_providers {
proxmox = {
source = "Telmate/proxmox"
version = "3.0.1-rc3"
}
}
}
provider "proxmox" {
pm_api_url = "https://coconut.stevenchen.one/api2/json"
# pm_user = "terraform-user@pve"
# pm_password = "SecurePassword123!!!"
# pm_api_token_id = "terraform-user@pve!TF"
# pm_api_token_secret = "12345678-1234-1a2a-cde3-456fgh78ij90"
}
The version string is currently set to 3.0.1-rc3 because the stable release does not support Proxmox 8. It is probably better to use >= instead of = so that newer fixes are picked up.
As per Telmate's docs, for authentication, it is possible to use Username/Password or an API Token/Secret. Just pick one to use.
As also noted in the docs, if you don't want to hardcode the credentials in the file, you can instead set environment variables in the current terminal session: PM_USER/PM_PASS or PM_API_TOKEN_ID/PM_API_TOKEN_SECRET.
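In a bash shell (e.g. WSL or Git Bash) that would look like the sketch below; on my Windows 11 host the PowerShell equivalent uses $env:PM_USER and friends. These are the same example credentials from earlier, not real ones.
# pick either user/password or token auth
export PM_USER="terraform-user@pve"
export PM_PASS="SecurePassword123!!!"
# export PM_API_TOKEN_ID="terraform-user@pve!TF"
# export PM_API_TOKEN_SECRET="12345678-1234-1a2a-cde3-456fgh78ij90"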
Run terraform init to initialize Terraform and pull the provider from the registry.
Creating the Main Configuration File
Extra configuration options can be found in Telmate's repository. Some notes on the configuration shown below:
- The template created from my other guide sets the default values, so any values not configured in the TF file are taken from the template.
- The disks seem to need to match the ones in the template; scsi0 and ide0 are created to match the image drive and Cloud-init drive from the template.
- The template has a default scsi0 size of 24G; when I set the size in the TF file to something lower than that, the VM wasn't able to find a boot image on startup.
- The iothread option only works with virtio-scsi-single. Discard and emulatessd should only be enabled if the VM storage is on an SSD.
I have agent = 1, which enables the QEMU Guest Agent for the VM. When applying the configuration, TF doesn't seem to complete until the agent is installed. (While it is probably better to install the package through Cloud-init, I'm going to use Ansible to install it)
resource "proxmox_vm_qemu" "kubernetes-worker-node" {
target_node = "coconut"
desc = "Kubernetes worker node"
count = 2
onboot = false
clone = "ubuntu-2404-template"
agent = 1
os_type = "cloud-init"
cores = 2
sockets = 1
numa = false
vcpus = 0
cpu = "host"
memory = 2048
name = "node-${count.index}"
scsihw = "virtio-scsi-single"
bootdisk = "scsi0"
ipconfig0 = "ip=dhcp"
network {
model = "virtio"
bridge = "vmbr0"
}
disks {
ide {
ide0 {
cloudinit {
storage = "local-zfs"
}
}
}
scsi {
scsi0 {
disk {
storage = "local-zfs"
size = 24
discard = true
emulatessd = true
iothread = true
}
}
}
}
}
resource "proxmox_vm_qemu" "kubernetes-server" {
target_node = "coconut"
desc = "Kubernetes server"
count = 1
onboot = false
clone = "ubuntu-2404-template"
agent = 1
os_type = "cloud-init"
cores = 2
sockets = 1
numa = false
vcpus = 0
cpu = "host"
memory = 2048
name = "server"
scsihw = "virtio-scsi-single"
bootdisk = "scsi0"
ipconfig0 = "ip=dhcp"
network {
model = "virtio"
bridge = "vmbr0"
}
disks {
ide {
ide0 {
cloudinit {
storage = "local-zfs"
}
}
}
scsi {
scsi0 {
disk {
storage = "local-zfs"
size = 24
discard = true
emulatessd = true
iothread = true
}
}
}
}
}
resource "proxmox_vm_qemu" "administration-host" {
target_node = "coconut"
desc = "Administration host"
count = 1
onboot = false
clone = "ubuntu-2404-template"
agent = 1
os_type = "cloud-init"
cores = 1
sockets = 1
numa = false
vcpus = 0
cpu = "host"
memory = 1024
name = "jumpbox"
scsihw = "virtio-scsi-single"
bootdisk = "scsi0"
ipconfig0 = "ip=dhcp"
network {
model = "virtio"
bridge = "vmbr0"
}
disks {
ide {
ide0 {
cloudinit {
storage = "local-zfs"
}
}
}
scsi {
scsi0 {
disk {
storage = "local-zfs"
size = 24
discard = true
emulatessd = true
iothread = true
}
}
}
}
}
Run terraform plan to see the proposed changes, then terraform apply to apply them.
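Optionally, the plan can be written to a file so that apply runs exactly what was reviewed, and the result can be confirmed on the Proxmox node (IDs and names will vary with your setup):
terraform plan -out tfplan
terraform apply tfplan
# on the Proxmox node, the four clones should now be listed
qm list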
Ansible Playbook for QEMU Guest Agent
As mentioned earlier, I will be using Ansible to install the guest agent. From the template guide, I already provided an SSH key, which I will be using to remote in.
On a separate host, I'll install Ansible with pipx:
pipx install --include-deps ansible && pipx ensurepath
Other installation methods are posted on the Ansible docs.
Create an inventory file (inventory.yml) for the 4 hosts that were just provisioned:
kubernetes-worker-node:
hosts:
10.0.10.202:
10.0.10.203:
kubernetes-server:
hosts:
10.0.10.201:
administration-host:
hosts:
10.0.10.200:
Run a quick ping test to see if all hosts are reachable:
ansible -i inventory.yml -m ping all
Run this Ansible playbook to configure QEMU Guest Agent:
- name: Install and Enable Agent
  hosts: kubernetes-worker-node:kubernetes-server:administration-host
  become: yes
  tasks:
    - name: Install Package
      ansible.builtin.package:
        name: qemu-guest-agent
        state: present
    - name: Enable Service
      ansible.builtin.service:
        name: qemu-guest-agent
        state: started
        enabled: yes
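With the inventory and playbook saved, run the playbook and spot-check the service. The playbook file name here is just a placeholder for whatever you saved it as.
ansible-playbook -i inventory.yml qemu-guest-agent.yml
ansible -i inventory.yml all -a "systemctl is-active qemu-guest-agent"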
Jumpbox Setup
Intro
This section will cover 02-jumpbox of Kubernetes the Hard Way. To practice some basic Ansible, I will convert the steps in the repository into a playbook.
The docs require these steps:
- Install Command Line Utilities
- Sync GitHub Repository
- Download Binaries
- Install kubectl
I created an Ansible playbook to do those steps (for practice).
Kubernetes The Hard Way is written for ARM-based systems. While I could have emulated ARM with QEMU, I chose to run AMD64-based virtual machines. This changes the contents of the downloads.txt file, as I need to change the links from ARM64 to AMD64.
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/amd64/kubectl
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/amd64/kube-apiserver
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/amd64/kube-controller-manager
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/amd64/kube-scheduler
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-amd64.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64
https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
https://github.com/containerd/containerd/releases/download/v1.7.8/containerd-1.7.8-linux-amd64.tar.gz
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/amd64/kube-proxy
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/amd64/kubelet
https://github.com/etcd-io/etcd/releases/download/v3.4.27/etcd-v3.4.27-linux-amd64.tar.gz
Ansible Playbook
Notes:
- Need to use become: yes to get the privileges to install packages with apt.
- I needed to add the Get Remote File Contents step so that Ansible reads downloads.txt on the remote node instead of the local node.
- name: Setup Jumpbox
hosts: administration-host
tasks:
- name: Install CLI Utils
become: yes
ansible.builtin.package:
name:
- wget
- curl
- vim
- openssl
- git
state: present
- name: Clone Repo
ansible.builtin.git:
repo: https://github.com/kelseyhightower/kubernetes-the-hard-way.git
depth: 1
dest: "{{ ansible_env.HOME }}/kubernetes-the-hard-way"
single_branch: yes
version: master
- name: Create Downloads Dir
ansible.builtin.file:
path: "{{ ansible_env.HOME }}/kubernetes-the-hard-way/downloads"
state: directory
- name: Get Remote File Contents (Downloads.txt)
ansible.builtin.command: "cat {{ ansible_env.HOME }}/kubernetes-the-hard-way/downloads.txt"
register: urls
- name: Download Binaries
ansible.builtin.get_url:
url: "{{ item }}"
dest: "{{ ansible_env.HOME }}/kubernetes-the-hard-way/downloads/"
loop: "{{ urls.stdout.splitlines() }}"
- name: Add Execution to kubectl
ansible.builtin.file:
path: "{{ ansible_env.HOME }}/kubernetes-the-hard-way/downloads/kubectl"
mode: a+x
- name: Copy kubectl to Binaries Dir
become: yes
ansible.builtin.command: "cp {{ ansible_env.HOME }}/kubernetes-the-hard-way/downloads/kubectl /usr/local/bin/"
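To run it, point ansible-playbook at the inventory from the previous section (the playbook file name is a placeholder), then spot-check that kubectl landed on the jumpbox:
ansible-playbook -i inventory.yml jumpbox.yml
ansible -i inventory.yml administration-host -a "kubectl version --client"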
Compute Resources
Intro
This section will cover 03-compute-resources of Kubernetes the Hard Way.
The docs require these steps:
- Configure Machine Database
- Configuring SSH Access
- Configuring DNS
Since I have a local domain set up in my network (local.lan), I'm already able to ping each machine by its hostname. I'll still run through adding the host entries manually to each machine's hosts file.
Machine Database
The first step is to create a text file that will contain the machine
attributes for later setup. It will follow this schema:
IPV4_ADDRESS FQDN HOSTNAME POD_SUBNET
Mine will look like this:
10.0.10.201 server.local.lan server
10.0.10.202 node-0.local.lan node-0 10.200.0.0/24
10.0.10.203 node-1.local.lan node-1 10.200.1.0/24
Configuring SSH
The guide uses a Debian install while I'm using Ubuntu, but both have PermitRootLogin disabled in the SSH config.
To enable root SSH, we have to modify the /etc/ssh/sshd_config file. While the docs give a command to replace the string in the config file, I'll write it in Ansible for practice.
- name: Enable PermitRootLogin in hosts
hosts: kubernetes-worker-node:kubernetes-server
become: yes
tasks:
- name: Set PermitRootLogin to yes
ansible.builtin.lineinfile:
path: /etc/ssh/sshd_config
regexp: '^#?PermitRootLogin'
line: 'PermitRootLogin yes'
state: present
backup: yes
- name: Restart SSH service
ansible.builtin.service:
name: ssh
state: restarted
This playbook generates the key on the jumpbox, then adds the public key to authorized_keys on the other three machines.
- name: Generate and get ED25519 key
become: yes
hosts: administration-host
tasks:
- name: Check if key exists already
ansible.builtin.stat:
path: /root/.ssh/id_ed25519
register: ssh_key
- name: Generate key
ansible.builtin.command: "ssh-keygen -t ed25519 -f /{{ ansible_env.HOME }}/.ssh/id_ed25519 -N ''"
when: not ssh_key.stat.exists
- name: Read the pub key
ansible.builtin.slurp:
src: "/{{ ansible_env.HOME }}/.ssh/id_ed25519.pub"
register: pub_key
- name: Store var
set_fact:
pub_key: "{{ pub_key.content | b64decode }}"
delegate_facts: true
delegate_to: localhost
- name: Copy key to other hosts
become: yes
hosts: kubernetes-server:kubernetes-worker-node
tasks:
- name: Create .ssh directory if it doesn't exist
ansible.builtin.file:
path: "/{{ ansible_env.HOME }}/.ssh"
state: directory
mode: '0700'
- name: Add key to authorized hosts
ansible.builtin.lineinfile:
path: "/{{ ansible_env.HOME }}/.ssh/authorized_keys"
line: "{{ hostvars['localhost']['pub_key'] }}"
create: yes
The docs provide a script to verify that the jumpbox is able to SSH in as the root user.
while read IP FQDN HOST SUBNET; do
ssh -n root@${IP} uname -o -m
done < machines.txt
The output should look something like this:
x86_64 GNU/Linux
x86_64 GNU/Linux
x86_64 GNU/Linux
Configuring DNS
As I stated earlier, my network already has a local domain set. This allows me to ping each host without needing to edit the hosts file on each machine. I'll list the basic outline below; a more detailed approach is found in Kelsey's docs.
touch hosts
while read IP FQDN HOST SUBNET; do
ENTRY="${IP} ${FQDN} ${HOST}"
echo $ENTRY >> hosts
done < machines.txt
cat hosts >> /etc/hosts
The jumpbox should now be able to identify the machines using their hostname instead of their IPs.
for host in server node-0 node-1
do ssh root@${host} uname -o -m -n
done
The below script copies the hosts file created on the jumpbox to the other machines, then appends it to each machine's /etc/hosts file.
while read IP FQDN HOST SUBNET; do
scp hosts root@${HOST}:~/
ssh -n \
root@${HOST} "cat hosts >> /etc/hosts"
done < machines.txt
Configuring the Certificate Authority
Intro
This section will cover 04-certificate-authority of Kubernetes the Hard Way.
The repository provides the configuration file for openssl. The hostnames for all the machines already match what is given, so we just need to adjust the DNS entries; my modified configuration is at the bottom of this section. The below commands are copied from the guide and should be run on the jumpbox.
Generating the Certificates and Private Keys
Since we are using the ca.conf file provided in the repository, we just need to run this command to generate the root certificate.
openssl genrsa -out ca.key 4096
openssl req -x509 -new -sha512 -noenc \
-key ca.key -days 3653 \
-config ca.conf \
-out ca.crt
Generate all the client certificates.
certs=(
"admin" "node-0" "node-1"
"kube-proxy" "kube-scheduler"
"kube-controller-manager"
"kube-api-server"
"service-accounts"
)
for i in ${certs[*]}; do
openssl genrsa -out "${i}.key" 4096
openssl req -new -key "${i}.key" -sha256 \
-config "ca.conf" -section ${i} \
-out "${i}.csr"
openssl x509 -req -days 3653 -in "${i}.csr" \
-copy_extensions copyall \
-sha256 -CA "ca.crt" \
-CAkey "ca.key" \
-CAcreateserial \
-out "${i}.crt"
done
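Before copying anything, a quick sanity check (not part of the guide) is to verify the issued certificates against the CA and inspect a subject and its SANs:
# every issued certificate should report OK
openssl verify -CAfile ca.crt admin.crt node-0.crt node-1.crt kube-api-server.crt
# check the subject and subjectAltName of a node certificate
openssl x509 -in node-0.crt -noout -subject -ext subjectAltName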
Copy the generated client certificates to their respective hosts.
for host in node-0 node-1; do
ssh root@$host mkdir /var/lib/kubelet/
scp ca.crt root@$host:/var/lib/kubelet/
scp $host.crt \
root@$host:/var/lib/kubelet/kubelet.crt
scp $host.key \
root@$host:/var/lib/kubelet/kubelet.key
done
scp \
ca.key ca.crt \
kube-api-server.key kube-api-server.crt \
service-accounts.key service-accounts.crt \
root@server:~/
Modified Configuration File
[req]
distinguished_name = req_distinguished_name
prompt = no
x509_extensions = ca_x509_extensions
[ca_x509_extensions]
basicConstraints = CA:TRUE
keyUsage = cRLSign, keyCertSign
[req_distinguished_name]
C = US
ST = Texas
L = Austin
CN = CA
[admin]
distinguished_name = admin_distinguished_name
prompt = no
req_extensions = default_req_extensions
[admin_distinguished_name]
CN = admin
O = system:masters
# Service Accounts
#
# The Kubernetes Controller Manager leverages a key pair to generate
# and sign service account tokens as described in the
# [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/)
# documentation.
[service-accounts]
distinguished_name = service-accounts_distinguished_name
prompt = no
req_extensions = default_req_extensions
[service-accounts_distinguished_name]
CN = service-accounts
# Worker Nodes
#
# Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/docs/admin/authorization/node/)
# called Node Authorizer, that specifically authorizes API requests made
# by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet).
# In order to be authorized by the Node Authorizer, Kubelets must use a credential
# that identifies them as being in the `system:nodes` group, with a username
# of `system:node:<nodeName>`.
[node-0]
distinguished_name = node-0_distinguished_name
prompt = no
req_extensions = node-0_req_extensions
[node-0_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Node-0 Certificate"
subjectAltName = DNS:node-0, IP:127.0.0.1
subjectKeyIdentifier = hash
[node-0_distinguished_name]
CN = system:node:node-0
O = system:nodes
C = US
ST = Texas
L = Austin
[node-1]
distinguished_name = node-1_distinguished_name
prompt = no
req_extensions = node-1_req_extensions
[node-1_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Node-1 Certificate"
subjectAltName = DNS:node-1, IP:127.0.0.1
subjectKeyIdentifier = hash
[node-1_distinguished_name]
CN = system:node:node-1
O = system:nodes
C = US
ST = Texas
L = Austin
# Kube Proxy Section
[kube-proxy]
distinguished_name = kube-proxy_distinguished_name
prompt = no
req_extensions = kube-proxy_req_extensions
[kube-proxy_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Proxy Certificate"
subjectAltName = DNS:kube-proxy, IP:127.0.0.1
subjectKeyIdentifier = hash
[kube-proxy_distinguished_name]
CN = system:kube-proxy
O = system:node-proxier
C = US
ST = Texas
L = Austin
# Controller Manager
[kube-controller-manager]
distinguished_name = kube-controller-manager_distinguished_name
prompt = no
req_extensions = kube-controller-manager_req_extensions
[kube-controller-manager_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Controller Manager Certificate"
subjectAltName = DNS:kube-proxy, IP:127.0.0.1
subjectKeyIdentifier = hash
[kube-controller-manager_distinguished_name]
CN = system:kube-controller-manager
O = system:kube-controller-manager
C = US
ST = Texas
L = Austin
# Scheduler
[kube-scheduler]
distinguished_name = kube-scheduler_distinguished_name
prompt = no
req_extensions = kube-scheduler_req_extensions
[kube-scheduler_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Scheduler Certificate"
subjectAltName = DNS:kube-scheduler, IP:127.0.0.1
subjectKeyIdentifier = hash
[kube-scheduler_distinguished_name]
CN = system:kube-scheduler
O = system:system:kube-scheduler
C = US
ST = Texas
L = Austin
# API Server
#
# The Kubernetes API server is automatically assigned the `kubernetes`
# internal dns name, which will be linked to the first IP address (`10.32.0.1`)
# from the address range (`10.32.0.0/24`) reserved for internal cluster
# services.
[kube-api-server]
distinguished_name = kube-api-server_distinguished_name
prompt = no
req_extensions = kube-api-server_req_extensions
[kube-api-server_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Scheduler Certificate"
subjectAltName = @kube-api-server_alt_names
subjectKeyIdentifier = hash
[kube-api-server_alt_names]
IP.0 = 127.0.0.1
IP.1 = 10.32.0.1
DNS.0 = kubernetes
DNS.1 = kubernetes.default
DNS.2 = kubernetes.default.svc
DNS.3 = kubernetes.default.svc.cluster
DNS.4 = kubernetes.svc.cluster.local
DNS.5 = server.local.lan
DNS.6 = api-server.local.lan
[kube-api-server_distinguished_name]
CN = kubernetes
C = US
ST = Texas
L = Austin
[default_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Admin Client Certificate"
subjectKeyIdentifier = hash
Kubernetes Configuration Files
Intro
This section will cover 05-kubernetes-configuration-files of Kubernetes the Hard Way.
The below commands are copied from the guide. I have modified them slightly to match my domain. These commands should be run on the jumpbox.
The only thing I modified was the --server argument; this applies to all the files below.
Kubelet Configuration File
for host in node-0 node-1; do
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://server.local.lan:6443 \
--kubeconfig=${host}.kubeconfig
kubectl config set-credentials system:node:${host} \
--client-certificate=${host}.crt \
--client-key=${host}.key \
--embed-certs=true \
--kubeconfig=${host}.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:${host} \
--kubeconfig=${host}.kubeconfig
kubectl config use-context default \
--kubeconfig=${host}.kubeconfig
done
Kube-proxy Configuration File
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://server.local.lan:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.crt \
--client-key=kube-proxy.key \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default \
--kubeconfig=kube-proxy.kubeconfig
Kube-controller-manager Configuration File
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://server.local.lan:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.crt \
--client-key=kube-controller-manager.key \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default \
--kubeconfig=kube-controller-manager.kubeconfig
Kube-scheduler Configuration File
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://server.local.lan:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.crt \
--client-key=kube-scheduler.key \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default \
--kubeconfig=kube-scheduler.kubeconfig
Admin Configuration File
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.crt \
--client-key=admin.key \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default \
--kubeconfig=admin.kubeconfig
Distributing the Configuration Files
List all the kubeconfig files generated with:
ls *.kubeconfig
There should be 6 files:
admin.kubeconfig kube-controller-manager.kubeconfig kube-proxy.kubeconfig kube-scheduler.kubeconfig node-0.kubeconfig node-1.kubeconfig
Copy the kubelet and kube-proxy kubeconfig files to the two nodes:
for host in node-0 node-1; do
ssh root@$host "mkdir /var/lib/{kube-proxy,kubelet}"
scp kube-proxy.kubeconfig \
root@$host:/var/lib/kube-proxy/kubeconfig
scp ${host}.kubeconfig \
root@$host:/var/lib/kubelet/kubeconfig
done
Copy the kube-controller-manager and kube-scheduler kubeconfig files to the controller instance:
scp admin.kubeconfig \
kube-controller-manager.kubeconfig \
kube-scheduler.kubeconfig \
root@server:~/
Generating the Data Encryption Config and Key
Intro
This section will cover 06-data-encryption-keys of Kubernetes the Hard Way.
The below commands are copied from the guide. These commands should be run on the jumpbox.
Encryption Key and File Configuration
To generate the key:
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
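A quick sanity check, not in the guide: the aescbc provider expects a 32-byte key, so decoding it should print 32.
echo -n "$ENCRYPTION_KEY" | base64 -d | wc -c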
It seems like the configuration file is missing from the repository. Solutions were found on this GitHub issue thread.
Create the configuration file:
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
Copy the configuration to the controller:
scp encryption-config.yaml root@server:~/
Notes
The current docs have an alternate command, shown below. It requires the encryption config template to already exist with the ${ENCRYPTION_KEY} placeholder; the command copies the template and fills in the key.
envsubst < configs/encryption-config.yaml \
> encryption-config.yaml
The original contents of configs/encryption-config.yaml are shown below:
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
Bootstrapping the etcd Cluster
Intro
This section will cover 07-bootstrapping-etcd of Kubernetes the Hard Way.
The below commands are copied from the guide. These commands should be
run on the jumpbox.
etcd is an open source distributed key-value store used to hold and manage the critical information that distributed systems need to keep running. Most notably, it manages the configuration data, state data, and metadata for Kubernetes, the popular container orchestration platform. - IBM
Installation
As stated before, the guide is based on an ARM machine, but I am using an AMD64 machine. Update the file names as needed.
Copy the file from the jumpbox to the server:
scp \
downloads/etcd-v3.4.27-linux-amd64.tar.gz \
units/etcd.service \
root@server:~/
SSH into the server:
ssh root@server
Extract the contents from the file:
tar -xvf etcd-v3.4.27-linux-amd64.tar.gz
Move the files to the binaries directory:
mv etcd-v3.4.27-linux-amd64/etcd* /usr/local/bin/
Configure the etcd server:
mkdir -p /etc/etcd /var/lib/etcd
chmod 700 /var/lib/etcd
cp ca.crt kube-api-server.key kube-api-server.crt \
/etc/etcd/
Edit the service file:
vi etcd.service
# or use nano
As mentioned earlier, the guide is written for ARM64, so we can just remove the line:
Environment="ETCD_UNSUPPORTED_ARCH=arm64"
[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \
--name controller \
--initial-advertise-peer-urls http://127.0.0.1:2380 \
--listen-peer-urls http://127.0.0.1:2380 \
--listen-client-urls http://127.0.0.1:2379 \
--advertise-client-urls http://127.0.0.1:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster controller=http://127.0.0.1:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Add the Systemd service:
mv etcd.service /etc/systemd/system/
Start the etcd server:
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
Verify that it is running and correct:
etcdctl member list
Output should look something like this:
6702b0a34e2cfd39, started, controller, http://127.0.0.1:2380, http://127.0.0.1:2379, false
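As an extra check beyond the guide, etcdctl can also report endpoint health, and systemd can confirm the unit is active:
etcdctl endpoint health
systemctl is-active etcd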
Bootstrapping the Kubernetes Control Plane
Intro
This section will cover 08-bootstrapping-kubernetes-controllers of Kubernetes the Hard Way.
The below commands are copied from the guide. These commands should be run on the jumpbox.
Control Plane Setup
Copy the files:
scp \
downloads/kube-apiserver \
downloads/kube-controller-manager \
downloads/kube-scheduler \
downloads/kubectl \
units/kube-apiserver.service \
units/kube-controller-manager.service \
units/kube-scheduler.service \
configs/kube-scheduler.yaml \
configs/kube-apiserver-to-kubelet.yaml \
root@server:~/
SSH into the server:
ssh root@server
Create the Kubernetes configuration directory:
mkdir -p /etc/kubernetes/config
Allow execution of Kubernetes binaries:
chmod +x kube-apiserver \
kube-controller-manager \
kube-scheduler kubectl
Move the Kubernetes binaries:
mv kube-apiserver \
kube-controller-manager \
kube-scheduler kubectl \
/usr/local/bin/
Create a directory for Kubernetes API server:
mkdir -p /var/lib/kubernetes/
Move the files over:
mv ca.crt ca.key \
kube-api-server.key kube-api-server.crt \
service-accounts.key service-accounts.crt \
encryption-config.yaml \
/var/lib/kubernetes/
Move the kube-apiserver Systemd file:
mv kube-apiserver.service \
/etc/systemd/system/kube-apiserver.service
Move the kube-controller-manager kubeconfig file:
mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
Move the kube-controller-manager Systemd file:
mv kube-controller-manager.service /etc/systemd/system/
Move the kube-scheduler kubeconfig file:
mv kube-scheduler.kubeconfig /var/lib/kubernetes/
Move the kube-scheduler configuration file:
mv kube-scheduler.yaml /etc/kubernetes/config/
Move the kube-scheduler Systemd file:
mv kube-scheduler.service /etc/systemd/system/
Start the controller services:
systemctl daemon-reload
systemctl enable kube-apiserver \
kube-controller-manager kube-scheduler
systemctl start kube-apiserver \
kube-controller-manager kube-scheduler
Verify that it is running and correct:
kubectl cluster-info \
--kubeconfig admin.kubeconfig
Output should look something like this:
Kubernetes control plane is running at https://127.0.0.1:6443
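Another quick check, not part of the guide, is to confirm that all three control plane services came up:
systemctl is-active kube-apiserver kube-controller-manager kube-scheduler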
Kubernetes Role-Based Access Control
This command will be executed on the server.
Create the system:kube-apiserver-to-kubelet role:
kubectl apply -f kube-apiserver-to-kubelet.yaml \
--kubeconfig admin.kubeconfig
Verify from the jumpbox:
Change the domain as needed.
curl -k --cacert ca.crt https://server.local.lan:6443/version
Output should look something like this:
{
"major": "1",
"minor": "28",
"gitVersion": "v1.28.3",
"gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
"gitTreeState": "clean",
"buildDate": "2023-10-18T11:33:18Z",
"goVersion": "go1.20.10",
"compiler": "gc",
"platform": "linux/amd64"
}
Bootstrapping the Kubernetes Worker Nodes
Intro
This section will cover 09-bootstrapping-kubernetes-workers of Kubernetes the Hard Way.
The below commands are copied from the guide. These commands should be run on the jumpbox.
Prerequisites
Fill in and copy networking configuration to workers:
This uses the machines.txt file that was previously generated.
for host in node-0 node-1; do
SUBNET=$(grep $host machines.txt | cut -d " " -f 4)
sed "s|SUBNET|$SUBNET|g" \
configs/10-bridge.conf > 10-bridge.conf
sed "s|SUBNET|$SUBNET|g" \
configs/kubelet-config.yaml > kubelet-config.yaml
scp 10-bridge.conf kubelet-config.yaml \
root@$host:~/
done
Copy binaries and Systemd files to workers:
ARM64 -> AMD64 file names again.
for host in node-0 node-1; do
scp \
downloads/runc.amd64 \
downloads/crictl-v1.28.0-linux-amd64.tar.gz \
downloads/cni-plugins-linux-amd64-v1.3.0.tgz \
downloads/containerd-1.7.8-linux-amd64.tar.gz \
downloads/kubectl \
downloads/kubelet \
downloads/kube-proxy \
configs/99-loopback.conf \
configs/containerd-config.toml \
configs/kubelet-config.yaml \
configs/kube-proxy-config.yaml \
units/containerd.service \
units/kubelet.service \
units/kube-proxy.service \
root@$host:~/
done
Provisioning a Kubernetes Worker Node
Since these commands must be run on both nodes, I will write another Ansible script for practice.
kubelet will fail to start if swap is enabled.
The Ubuntu Cloud image I'm using seems to have swap disabled by default.
To check whether swap is enabled, run the following command; if it produces no output, swap is disabled.
swapon --show
To disable swap temporarily:
swapoff -a
To make the swap change persist across reboots, check the documentation for the specific distro you are using; one common approach is sketched below.
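On Ubuntu, the usual method is to comment out any swap entries in /etc/fstab so they are not activated on the next boot. This is an assumption about your fstab layout, so review it before running.
# back up fstab, then comment out lines containing a swap entry
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab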
Below is the playbook:
- name: Provision Kubernetes Worker Nodes
  hosts: kubernetes-worker-node
become: yes
tasks:
- name: Install OS dependencies
ansible.builtin.package:
name:
- socat
- conntrack
- ipset
state: present
- name: Create required directories for Kubernetes
file:
path: "{{ item }}"
state: directory
mode: '0755'
loop:
- /etc/cni/net.d
- /opt/cni/bin
- /var/lib/kubelet
- /var/lib/kube-proxy
- /var/lib/kubernetes
- /var/run/kubernetes
- name: Create containerd directory
file:
path: /root/containerd
state: directory
mode: '0755'
- name: Extract crictl archive
unarchive:
src: /root/crictl-v1.28.0-linux-amd64.tar.gz
dest: /root/
remote_src: yes
- name: Extract containerd archive into containerd directory
unarchive:
src: /root/containerd-1.7.8-linux-amd64.tar.gz
dest: /root/containerd
remote_src: yes
- name: Extract CNI plugins into /opt/cni/bin/
unarchive:
src: /root/cni-plugins-linux-amd64-v1.3.0.tgz
dest: /opt/cni/bin/
remote_src: yes
- name: Rename runc.amd64 to runc
command: mv /root/runc.amd64 /root/runc
- name: Make crictl, kubectl, kube-proxy, kubelet, and runc executable
file:
path: "{{ item }}"
mode: a+x
loop:
- /root/crictl
- /root/kubectl
- /root/kube-proxy
- /root/kubelet
- /root/runc
- name: Move crictl, kubectl, kube-proxy, kubelet, and runc to /usr/local/bin/
command: mv /root/{{ item }} /usr/local/bin/
loop:
- crictl
- kubectl
- kube-proxy
- kubelet
- runc
- name: Move containerd binaries to /bin/
shell: mv /root/containerd/bin/* /bin/
- name: Configure CNI Networking
command: mv /root/10-bridge.conf /root/99-loopback.conf /etc/cni/net.d/
- name: Create containerd directory
file:
path: /etc/containerd
state: directory
mode: '0755'
- name: Move containerd config file
command: mv /root/containerd-config.toml /etc/containerd/config.toml
- name: Move containerd Systemd file
command: mv /root/containerd.service /etc/systemd/system/
- name: Move Kubelet config file
command: mv /root/kubelet-config.yaml /var/lib/kubelet/
- name: Move Kubelet Systemd file
command: mv /root/kubelet.service /etc/systemd/system/
- name: Move Kubernetes Proxy config file
command: mv /root/kube-proxy-config.yaml /var/lib/kube-proxy/
- name: Move Kubernetes Proxy Systemd file
command: mv /root/kube-proxy.service /etc/systemd/system/
- name: Systemd reread configs
ansible.builtin.systemd_service:
daemon_reload: true
- name: Enable containerd and start
ansible.builtin.systemd_service:
state: started
enabled: true
name: containerd
- name: Enable kubelet and start
ansible.builtin.systemd_service:
state: started
enabled: true
name: kubelet
- name: Enable kube-proxy and start
ansible.builtin.systemd_service:
state: started
enabled: true
name: kube-proxy
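Run the playbook against the inventory from earlier; the playbook file name here is just a placeholder.
ansible-playbook -i inventory.yml worker-nodes.yml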
Verify by listing the registered Kubernetes nodes:
ssh root@server \
"kubectl get nodes \
--kubeconfig admin.kubeconfig"
The output should look like this:
NAME STATUS ROLES AGE VERSION
node-0 Ready <none> 14s v1.28.3
node-1 Ready <none> 14s v1.28.3
Configuring kubectl for Remote Access
Intro
This section will cover 10-configuring-kubectl of Kubernetes the Hard Way.
The below commands are copied from the guide. These commands should be run on the jumpbox.
Admin Kubernetes Configuration File
Change the domain as needed; it should match the domains that were set up at the beginning.
Make a request to the API server to test reachability:
curl -k --cacert ca.crt https://server.local.lan:6443/version
The output should look like this:
{
"major": "1",
"minor": "28",
"gitVersion": "v1.28.3",
"gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
"gitTreeState": "clean",
"buildDate": "2023-10-18T11:33:18Z",
"goVersion": "go1.20.10",
"compiler": "gc",
"platform": "linux/amd64"
}
Generate a kubeconfig file for authenticating as the admin user:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://server.local.lan:6443
kubectl config set-credentials admin \
--client-certificate=admin.crt \
--client-key=admin.key
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
The result of running the commands above should be a kubeconfig file in ~/.kube/config. This allows you to use kubectl without specifying a configuration file.
Verify the version of the remote Kubernetes cluster:
kubectl version
The output should look like this:
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3
List the nodes in the remote cluster:
kubectl get nodes
The output should look like this:
NAME STATUS ROLES AGE VERSION
node-0 Ready <none> 16m v1.28.3
node-1 Ready <none> 16m v1.28.3
Provisioning Pod Network Routes
Intro
This section will cover 11-pod-network-routes of Kubernetes the Hard Way.
The below commands are copied from the guide. These commands should be run on the jumpbox.
Networking Configuration
Grab the IPs defined in the machines.txt file and assign them to variables.
SERVER_IP=$(grep server machines.txt | cut -d " " -f 1)
NODE_0_IP=$(grep node-0 machines.txt | cut -d " " -f 1)
NODE_0_SUBNET=$(grep node-0 machines.txt | cut -d " " -f 4)
NODE_1_IP=$(grep node-1 machines.txt | cut -d " " -f 1)
NODE_1_SUBNET=$(grep node-1 machines.txt | cut -d " " -f 4)
Add network route to each machine:
ssh root@server <<EOF
ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
EOF
ssh root@node-0 <<EOF
ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
EOF
ssh root@node-1 <<EOF
ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
EOF
Verify by running:
ssh root@server ip route
ssh root@node-0 ip route
ssh root@node-1 ip route
The output should look something like this:
default via XXX.XXX.XXX.XXX dev ens160
10.200.0.0/24 via XXX.XXX.XXX.XXX dev ens160
10.200.1.0/24 via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
default via XXX.XXX.XXX.XXX dev ens160
10.200.1.0/24 via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
default via XXX.XXX.XXX.XXX dev ens160
10.200.0.0/24 via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
Smoke Test
Intro
This section will cover 12-smoke-test of Kubernetes the Hard Way.
The below commands are copied from the guide. These commands should be run on the jumpbox.
Data Encryption
Generate a secret:
kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata"
Print the hexdump of the secret:
ssh root@server \
'etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C'
Output should look something like this:
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 59 3b 81 c7 af 17 4f |:v1:key1:Y;....O|
00000050 78 e9 1a 14 60 61 fb be 56 48 b5 fe c4 f2 de b7 |x...`a..VH......|
00000060 4c fb 9a 1c 3d 5f 12 b3 b5 1d 7e b8 6b 9b 9b fc |L...=_....~.k...|
00000070 ce 5b a7 0e ee 90 8a a8 16 c1 72 c9 d3 f5 70 62 |.[........r...pb|
00000080 a1 e3 95 27 a6 27 4c d9 b3 7b a4 57 0e 18 95 6f |...'.'L..{.W...o|
00000090 2b 74 2a ce 53 52 8f 72 36 6b e3 bd 70 53 56 e1 |+t*.SR.r6k..pSV.|
000000a0 38 68 65 c0 b4 e7 e0 31 d2 10 04 14 04 88 5c 05 |8he....1......\.|
000000b0 06 f5 fe 67 1c eb 0c bf f3 80 00 be b6 f5 e3 78 |...g...........x|
000000c0 f0 59 12 e6 d5 03 1d e0 bf e1 5e 7b 2a 17 55 73 |.Y........^{*.Us|
000000d0 79 d5 d7 e9 27 19 47 34 d3 0a 13 81 7e ad 65 4b |y...'.G4....~.eK|
000000e0 d1 92 a1 89 84 de 18 c9 37 a8 7a 93 7d eb dc b9 |........7.z.}...|
000000f0 d1 d2 18 25 b5 4f d3 f7 63 1f 54 07 ca 59 07 75 |...%.O..c.T..Y.u|
00000100 11 a6 e4 44 71 8a 42 a7 af 10 c3 cc 1b 80 f4 e9 |...Dq.B.........|
00000110 49 fa 1d 11 d8 77 9e c3 72 50 da 79 36 bf da 9e |I....w..rP.y6...|
00000120 57 c5 26 da fc bd 2c 00 f7 0a 79 f3 12 d4 a9 ff |W.&...,...y.....|
00000130 4c 59 19 c3 41 a8 f3 d9 33 b9 35 59 90 fa 55 6f |LY..A...3.5Y..Uo|
00000140 a5 32 8e 7a 6a 57 9c 16 f1 42 ab 22 c6 55 87 36 |.2.zjW...B.".U.6|
00000150 55 68 dc 32 e5 de 74 73 07 0a |Uh.2..ts..|
Deployments
Create a deployment for an nginx server:
kubectl create deployment nginx \
--image=nginx:latest
List the pod created by the deployment:
kubectl get pods -l app=nginx
The output should look like this:
NAME READY STATUS RESTARTS AGE
nginx-56fcf95486-jddd8 1/1 Running 0 2m31s
Port Forwarding
Retrieve the full name of the nginx pod:
POD_NAME=$(kubectl get pods -l app=nginx \
-o jsonpath="{.items[0].metadata.name}")
Forward local port 8080 on the jumpbox to port 80 of the nginx pod:
kubectl port-forward $POD_NAME 8080:80
The output should look like this:
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Open a new terminal on the jumpbox and make an HTTP request to test the port forward:
curl --head http://127.0.0.1:8080
The output should look like this:
HTTP/1.1 200 OK
Server: nginx/1.27.1
Date: Sun, 18 Aug 2024 00:24:27 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Mon, 12 Aug 2024 14:21:01 GMT
Connection: keep-alive
ETag: "66ba1a4d-267"
Accept-Ranges: bytes
Close out the current terminal and return to the original one, then Ctrl-C to stop the port forwarding.
Logs
Print the nginx pod logs:
kubectl logs $POD_NAME
The output should look like this:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2024/08/18 00:17:54 [notice] 1#1: using the "epoll" event method
2024/08/18 00:17:54 [notice] 1#1: nginx/1.27.1
2024/08/18 00:17:54 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2024/08/18 00:17:54 [notice] 1#1: OS: Linux 6.8.0-40-generic
2024/08/18 00:17:54 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/08/18 00:17:54 [notice] 1#1: start worker processes
2024/08/18 00:17:54 [notice] 1#1: start worker process 29
2024/08/18 00:17:54 [notice] 1#1: start worker process 30
127.0.0.1 - - [18/Aug/2024:00:24:27 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/8.5.0" "-"
Executing Commands in a Container
Get the nginx version by running nginx -v in the container:
kubectl exec -ti $POD_NAME -- nginx -v
The output should look like this:
nginx version: nginx/1.27.1
Services
Expose port 80 on the nginx deployment:
kubectl expose deployment nginx \
--port 80 --type NodePort
Retrieve the node port assigned to the nginx service:
NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}')
Run cURL to test if the port has been exposed successfully:
curl -I http://node-0:${NODE_PORT}
We should get the same output as before:
HTTP/1.1 200 OK
Server: nginx/1.27.1
Date: Sun, 18 Aug 2024 01:49:34 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Mon, 12 Aug 2024 14:21:01 GMT
Connection: keep-alive
ETag: "66ba1a4d-267"
Accept-Ranges: bytes
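The smoke test leaves the nginx deployment, its service, and the test secret behind. Not part of the guide, but if you want to clean them up afterwards:
kubectl delete service nginx
kubectl delete deployment nginx
kubectl delete secret kubernetes-the-hard-way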