
14 posts tagged with "kubernetes"


How to Simplify Kubernetes Deployments with Kluctl: A Beginner's Guide

· 9 min read


Introduction

Kubernetes has revolutionized the way we manage containerized applications, providing a robust and scalable platform for deploying, scaling, and operating these applications. However, despite its powerful capabilities, managing Kubernetes deployments can sometimes be challenging. Popular tools like Helm and Kustomize have become the standard for many teams, but they might not always meet every need.

This is where Kluctl steps in. Kluctl is a deployment tool for Kubernetes that aims to combine the strengths of Helm and Kustomize while addressing their limitations. It provides a more flexible and declarative approach to Kubernetes deployments, making it an excellent choice for those seeking an alternative.

In this blog, we'll explore Kluctl's unique features and how it can streamline your Kubernetes deployment process. Whether you're an experienced Kubernetes user or just getting started, this guide will provide valuable insights into why Kluctl might be the tool you've been looking for.

An interactive version of this blog is available as an interactive scenario on killercoda.com.

What is Kluctl?

Kluctl is a modern deployment tool designed specifically for Kubernetes. It aims to simplify and enhance the deployment process by combining the best aspects of Helm and Kustomize, while also addressing some of their shortcomings. With Kluctl, you can manage complex Kubernetes deployments more efficiently and with greater flexibility.

Key Features of Kluctl

  1. Declarative Configuration: Kluctl allows you to define your deployments declaratively using YAML files. This approach ensures that your deployments are consistent and reproducible.

  2. GitOps Ready: Kluctl integrates seamlessly with GitOps workflows, enabling you to manage your deployments via Git. This integration supports continuous deployment practices and makes it easier to track changes and rollbacks.

  3. Flexible and Modular: Kluctl supports modular configurations, making it easy to reuse and share components across different projects. This modularity reduces duplication and enhances maintainability.

  4. Validation and Diffing: One of Kluctl's standout features is its built-in validation and diffing capabilities. Before applying changes, Kluctl shows you what changes will be made, allowing you to review and approve them. This feature helps prevent accidental misconfigurations and ensures deployments are accurate (see the example after this list).

source: https://kluctl.io/
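
To see the diffing in action once a project is set up (as we do later in this guide), you can run a diff against a target; a minimal sketch, assuming the dev target we define below:

kluctl diff -t dev

The command prints the full set of pending changes and exits without modifying the cluster.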

Why Choose Kluctl?

  • Enhanced Flexibility: Kluctl provides a higher degree of flexibility compared to traditional tools like Helm and Kustomize. It enables you to customize and manage your deployments in a way that best fits your workflow and organizational needs.

  • Improved Collaboration: By leveraging GitOps, Kluctl enhances collaboration within teams. All deployment configurations are stored in Git, making it easy for team members to review, suggest changes, and track the history of deployments.

  • Reduced Complexity: Kluctl simplifies the deployment process, especially for complex applications. Its modular approach allows you to break down deployments into manageable components, making it easier to understand and maintain your Kubernetes configurations.

In summary, Kluctl is a powerful tool that enhances the Kubernetes deployment experience. Its declarative nature, seamless GitOps integration, and advanced features make it an excellent choice for teams looking to improve their deployment workflows.

Installing Kluctl

Getting started with Kluctl is straightforward. The following steps will guide you through the installation process, allowing you to set up Kluctl on your local machine and prepare it for managing your Kubernetes deployments.

Step 1: Install Kluctl CLI

First, you need to install the Kluctl command-line interface (CLI). The CLI is the primary tool you'll use to interact with Kluctl.

To install the Kluctl CLI, run the following command:

curl -sSL https://github.com/kluctl/kluctl/releases/latest/download/install.sh | bash

This script downloads and installs the latest version of Kluctl. After the installation is complete, verify that Kluctl has been installed correctly by checking its version:

kluctl version

You should see output indicating the installed version of Kluctl, confirming that the installation was successful.

Step 2: Set Up a Kubernetes Cluster

Before you can use Kluctl, you need to have a Kubernetes cluster up and running. Kluctl interacts with your cluster to manage deployments, so it's essential to ensure that you have a functioning Kubernetes environment. If you haven't set up a Kubernetes cluster yet, you can refer to my previous blogs for detailed instructions on setting up clusters using various tools and services like Minikube, Kind, GKE, EKS, or AKS.
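
If you just need a quick local environment for this guide, a kind cluster is sufficient; a minimal sketch, assuming kind is installed:

kind create cluster --name kluctl-demo

Note that the target context used in the next section (kubernetes-admin@kubernetes) assumes a kubeadm-style cluster; adjust it to your cluster's context name (for the kind cluster above, that would be kind-kluctl-demo).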

Kluctl in action

Next we will set up a basic kluctl project. To start using kluctl, define a .kluctl.yaml file in the root of your project with the targets you want to deploy to.

Let's create a folder for our project and create a .kluctl.yaml file in it.

mkdir kluctl-project && cd kluctl-project

cat <<EOF > .kluctl.yaml
discriminator: "kluctl-demo-{{ target.name }}"

targets:
  - name: dev
    context: kubernetes-admin@kubernetes
    args:
      environment: dev
  - name: prod
    context: kubernetes-admin@kubernetes
    args:
      environment: prod

args:
  - name: environment
EOF

This file defines two targets, dev and prod, that will deploy to the same Kubernetes cluster.

We can use the args section to define arguments that we can then use to template our YAML files. For example, {{ args.environment }} renders to dev or prod depending on the target we are deploying to.
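
As a quick illustration (a hypothetical snippet, not one of the files we create below), any manifest in the project can reference such an argument:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  # renders to "dev" or "prod" depending on the selected target
  environment: "{{ args.environment }}"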

Create Deployment

Next we will create a kustomize deployment for the redis application. Under the hood, kluctl uses kustomize to manage the Kubernetes manifests. kustomize is a tool that lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.

> 💡 We are following the Basic Project Setup Introduction tutorial from the kluctl documentation.

Let's create a deployment.yaml where we will define elements that kluctl will use to deploy the application.

cat <<EOF > deployment.yaml
deployments:
  - path: redis

commonLabels:
  examples.kluctl.io/deployment-project: "redis"
EOF

Now we need to create the redis deployment folder.

mkdir redis && cd redis

Since we are using kustomize, we need to create a kustomization.yaml file.

cat <<EOF > kustomization.yaml
resources:
- deployment.yaml
- service.yaml
EOF

And now we can create the service.yaml and deployment.yaml files.

cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cart
spec:
  selector:
    matchLabels:
      app: redis-cart
  template:
    metadata:
      labels:
        app: redis-cart
    spec:
      containers:
        - name: redis
          image: redis:alpine
          ports:
            - containerPort: 6379
          readinessProbe:
            periodSeconds: 5
            tcpSocket:
              port: 6379
          livenessProbe:
            periodSeconds: 5
            tcpSocket:
              port: 6379
          volumeMounts:
            - mountPath: /data
              name: redis-data
          resources:
            limits:
              memory: 256Mi
              cpu: 125m
            requests:
              cpu: 70m
              memory: 200Mi
      volumes:
        - name: redis-data
          emptyDir: {}
EOF

cat <<EOF > service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-cart
spec:
  type: ClusterIP
  selector:
    app: redis-cart
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
EOF
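
Before deploying, you can optionally inspect what kluctl will actually apply. A minimal sketch using the render command; the --print-all flag (assuming a recent kluctl version) prints all rendered manifests, with templates resolved and kustomize applied, to stdout:

kluctl render -t dev --print-all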

Deploy the app

Next, we will deploy the redis application to the dev target. First, we need to change to the root of the kluctl-project repository and initialize a git repository there.

cd /root/kluctl-project && \
git init && \
git add . && \
git commit -m "Initial commit"

Now we can deploy the application to the dev environment.

kluctl deploy --yes -t dev

💡 Notice that we are using the --yes flag to avoid the confirmation prompt. This is useful for the scenario, but in real life you should always review the changes before applying them.

Handling Changes

Next we will introduce changes to our setup and see how kluctl handles them. Let's see what we have deployed so far by executing the tree command.

.
|-- deployment.yaml
|-- kustomization.yaml
`-- redis
    |-- deployment.yaml
    |-- kustomization.yaml
    `-- service.yaml

1 directory, 5 files

💡 Notice this resembles a typical kustomize directory structure.

One of the superpowers of kluctl is how transparently it handles changes. Let's modify the redis deployment and see how kluctl handles it.

yq -i eval '.spec.replicas = 2' redis/deployment.yaml

Now let's deploy the changes to the dev target.

kluctl deploy --yes -t dev

Remember that at the beginning we added custom labels to each deployment. Let's check that the labels were applied correctly.

kubectl get deployments -A --show-labels
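
The output should look roughly like this (columns trimmed for brevity); note the examples.kluctl.io/deployment-project label coming from our deployment.yaml:

NAMESPACE   NAME         READY   UP-TO-DATE   AVAILABLE   LABELS
default     redis-cart   2/2     2            2           app=redis-cart,examples.kluctl.io/deployment-project=redis,...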

Templating

Next, we will use the templating capabilities of kluctl to deploy the same application to a different namespace. So far we have two different environments, prod and dev. This setup works out of the box for multiple targets (clusters); in our case, however, we want a single target (cluster) and to deploy different targets to different namespaces.

Let's start by deleting the existing resources and modifying some files.

> 💡 It is possible to migrate the resources to a different namespace using the kluctl prune command. However, in this case, we will delete the old resources and recreate them in new namespaces.

kluctl delete --yes -t dev

In order to differentiate between the two environments, we need to adjust the discriminator field in the .kluctl.yaml file. The discriminator is what kluctl uses to recognize which objects in the cluster belong to a given deployment, so each target needs a unique one.

yq e '.discriminator = "kluctl-demo-{{ target.name }}-{{ args.environment }}"' -i .kluctl.yaml

We also need to create a namespace folder with a namespace YAML file and reference it in our deployment.yaml file.

First create the namespace folder.

mkdir namespace

Now we can add the namespace folder to the deployment.yaml file.

> 💡 Notice the use of barrier: true in the deployment.yaml file. This tells kluctl to apply the resources in the order they are defined and to wait for the resources before the barrier to become ready before applying the next ones.

cat <<EOF > deployment.yaml
deployments:
  - path: namespace
  - barrier: true
  - path: redis

commonLabels:
  examples.kluctl.io/deployment-project: "redis"

overrideNamespace: kluctl-demo-{{ args.environment }}
EOF

Now let's create the namespace YAML file.

cat <<EOF > ./namespace/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kluctl-demo-{{ args.environment }}
EOF

Test the deployment

We will test if our setup works by deploying the redis application to the dev and prod namespaces. Deploying the resources to the dev namespace:

kluctl deploy --yes -t dev

And to the prod namespace:

kluctl deploy --yes -t prod

Let's check if everything deployed as expected:

kubectl get pods,svc -n kluctl-demo-dev
kubectl get pods,svc -n kluctl-demo-prod

Closing thoughts

That's it! We have seen the basic capabilities of kluctl.

We have barely scratched the surface of kluctl's capabilities. You can use it to deploy to multiple clusters, namespaces, and even different environments.

The mix of jinja2-based templating and the kustomize-based architecture makes it a really flexible tool for complex deployments.
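
As a taste of what that looks like (a hypothetical snippet; nothing in our demo project uses it), a jinja2 conditional can vary a manifest per environment:

spec:
  # scale up only in prod; args.environment comes from the target definition
  replicas: {% if args.environment == "prod" %}3{% else %}1{% endif %}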

Thanks for taking the time to read this post. I hope you found it interesting and informative.

🔗 Connect with me on LinkedIn

🌐 Visit my blogs on Medium

GitOpsify Cloud Infrastructure with Crossplane and Flux

· 9 min read

In this article we are going to learn how to automate the provisioning of cloud resources via Crossplane and combine it with GitOps practices.

You will most benefit from this blog if you are a Platform or DevOps Engineer, Infrastructure Architect or Operations Specialist.

If you are new to GitOps, read more about it in my blog GitOps with Kubernetes

Let's set the stage by imagining the following context. We are working as part of a Platform Team in a large organization. Our goal is to help Development Teams onboard and get up to speed with using our Cloud Infrastructure. Here are a few base requirements:

  • The Platform Team doesn't have the resources to deal with every request individually, so there must be a very high degree of automation
  • Company policy is to adopt the principle of least privilege. We should expose the cloud resources only when needed with the lowest permissions necessary.
  • Developers are not interested in managing cloud, they should only consume cloud resources without even needing to login to a cloud console.
  • New Teams should get their own set of cloud resources when on-boarding to the Platform.
  • It should be easy to provision new cloud resources on demand.

Initial Architecture

The requirements lead us to an initial architecture proposal with following high level solution strategy.

  • create template repositories for various types of workloads (using Backstage Software Templates would be helpful)
  • once a new Team is onboarded and creates its first repository from a template, a CI pipeline will trigger and deploy common infrastructure components by adding the repository as a Source to the Flux infrastructure repo
  • once a Team wants to create more cloud infrastructure, they can place the Crossplane claim YAMLs in the designated folder in their repository
  • adjustments to this process are easily implemented using Crossplane Compositions

In a real-world scenario we would also manage Crossplane using Flux, but for demo purposes we are focusing only on the application level.

The developer experience should be similar to this:


Tools and Implementation

Knowing the requirements and initial architecture, we can start selecting the tools. For our example, the tools we will use are Flux and Crossplane.

We are going to use Flux as a GitOps engine, but the same could be achieved with ArgoCD or Rancher Fleet.

Let's look at the architecture and use cases that both tools support.

Flux Architecture Overview

Flux exposes several components in the form of Kubernetes CRDs and controllers that help express a workflow with the GitOps model. Below is a short description of the three major components; each has a corresponding CRD.

source: https://github.com/fluxcd/flux2

  1. Source Controller: its main role is to provide a standardized API to manage the sources of Kubernetes deployments, namely Git and Helm repositories.

    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: GitRepository
    metadata:
      name: podinfo
      namespace: default
    spec:
      interval: 1m
      url: https://github.com/stefanprodan/podinfo
  2. Kustomize Controller: this is the CD part of the workflow. Where source controllers specify sources of data, this controller specifies what artifacts to run from a repository.

    This controller can work with kustomization files, but also with plain Kubernetes manifests.

    apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    kind: Kustomization
    metadata:
      name: webapp
      namespace: apps
    spec:
      interval: 5m
      path: "./deploy"
      sourceRef:
        kind: GitRepository
        name: webapp
        namespace: shared
  3. Helm Controller: this operator helps manage Helm chart releases containing Kubernetes manifests and deploys them onto a cluster.

    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: backend
      namespace: default
    spec:
      interval: 5m
      chart:
        spec:
          chart: podinfo
          version: ">=4.0.0 <5.0.0"
          sourceRef:
            kind: HelmRepository
            name: podinfo
            namespace: default
          interval: 1m
      upgrade:
        remediation:
          remediateLastFailure: true
      test:
        enable: true
      values:
        service:
          grpcService: backend
        resources:
          requests:
            cpu: 100m
            memory: 64Mi

Crossplane Architecture Overview

Let’s look at what the Crossplane component model looks like. A word of warning: if you are new to Kubernetes this might be overwhelming, but there is value in making the effort to understand it. The diagram below shows the Crossplane component model and its basic interactions.

Source: Author, based on Crossplane.io

Learn more about Crossplane in my blog "Infrastructure as Code: the next big shift is here"

Demo

If you want to follow along with the demo, clone this repository; it contains all the scripts needed to run the demo code.

Prerequisites

In this demo, we are going to show how to use Flux and Crossplane to provision an EC2 instance directly from a new GitHub repository. This simulates a new team onboarding to our Platform.

To follow along, you will need AWS CLI configured on your local machine.

Once you obtain credentials, configure a default profile for the AWS CLI following this tutorial.

Locally, you will need the following installed:

  • Docker Desktop or another container runtime
  • WSL2 if using Windows
  • kubectl

Run make in the root folder of the project; this will:

If you are running on a Mac, use make setup_mac instead of make.

  • Install kind (Kubernetes IN Docker) if not already installed
  • Create kind cluster called crossplane-cluster and swap context to it
  • Install crossplane using helm
  • Install crossplane CLI if not already installed
  • Install flux CLI if not already installed
  • Install AWS provider on the cluster
  • Create a temporary file with AWS credentials based on default CLI profile
  • Create a secret with AWS credentials in crossplane-system namespace
  • Configure AWS provider to use the secret for provisioning the infrastructure
  • Remove the temporary file with credentials so it's not accidentally checked in the repository
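
Before continuing, it is worth sanity-checking the setup. These are standard kubectl checks, not part of the Makefile:

kubectl get pods -n crossplane-system   # Crossplane and provider pods should be Running
kubectl get providers                   # the AWS provider should report INSTALLED and HEALTHY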

The prerequisites listed earlier (Docker, WSL2 on Windows, kubectl) need to be installed manually.

IMPORTANT: The demo code will create a small EC2 Instance in the eu-central-1 region. The instance and underlying infrastructure will be removed as part of the demo, but please make sure all the resources were successfully removed, and in case of any disruptions in the demo flow, be ready to remove the resources manually.

Setup Flux Repository

  • create a new kind cluster with make, this will install Crossplane with AWS provider and configure secret to access selected AWS account

    Flux CLI was installed as part of the Makefile script, but optionally you can configure shell completion for the CLI: . <(flux completion zsh)

    Refer to the Flux documentation page for more installation options

  • create an access token in GitHub with full repo permissions
  • export variables for your GitHub user and the newly created token
    • export GITHUB_TOKEN=<token copied from GitHub>
    • export GITHUB_USER=<your user name>
  • use flux to bootstrap a new GitHub repository so flux can manage itself and underlying infrastructure

    Flux will look for GITHUB_USER and GITHUB_TOKEN variables and once found will create a private repository on GitHub where Flux infrastructure will be tracked.

    flux bootstrap github \
      --owner=${GITHUB_USER} \
      --repository=flux-infra \
      --path=clusters/crossplane-cluster \
      --personal

Setup Crossplane EC2 Composition

Now we will install a Crossplane Composition that defines what cloud resources to create when someone asks for an EC2 instance via a claim; a sketch of what such a claim can look like follows the steps below.

  • set up the Crossplane composition and definition for creating EC2 instances

    • kubectl crossplane install configuration piotrzan/crossplane-ec2-instance:v1
  • fork repository with the EC2 claims

    • gh repo fork https://github.com/Piotr1215/crossplane-ec2

      answer YES when prompted whether to clone the repository
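
For orientation, the claim in the forked repository (ec2-claim/claim-aws.yaml) has roughly the following shape. The API group/version and spec fields here are illustrative guesses, since the real schema is defined by the configuration's XRD; the kind and name match the resource we delete later during cleanup:

# illustrative sketch only; check the forked repo for the actual file
apiVersion: example.org/v1alpha1
kind: VirtualMachineInstance
metadata:
  name: sample-ec2
  namespace: default
spec:
  compositionSelector:
    matchLabels:
      provider: aws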

Clone Flux Infra Repository

  • clone the flux infra repository created in your personal repos

    git clone git@github.com:${GITHUB_USER}/flux-infra.git

    cd flux-infra

Add Source

  • add a source repository to tell Flux what to observe and synchronize

    Flux will register this repository and check it for changes every 30 seconds.

  • execute the command below in the flux-infra repository; it will add a Git Source

    flux create source git crossplane-demo \
    --url=https://github.com/${GITHUB_USER}/crossplane-ec2.git \
    --branch=master \
    --interval=30s \
    --export > clusters/crossplane-cluster/demo-source.yaml
  • the previous command created a file in the clusters/crossplane-cluster subfolder; commit the file

    • git add .
    • git commit -m "Adding Source Repository"
    • git push
  • execute kubectl get gitrepositories.source.toolkit.fluxcd.io -A to see the active Git repository sources in Flux

Create Flux Kustomization

  • set up a watch on the AWS managed resources; for now there should be none

    watch kubectl get managed

  • create a Flux Kustomization to watch the specific folder in the repository that holds the Crossplane EC2 claim

    flux create kustomization crossplane-demo \
    --target-namespace=default \
    --source=crossplane-demo \
    --path="./ec2-claim" \
    --prune=true \
    --interval=1m \
    --export > clusters/crossplane-cluster/crossplane-demo.yaml
    • git add .
    • git commit -m "Adding EC2 Instance"
    • git push
  • after a minute or so you should see a new EC2 Instance being synchronized by Crossplane and the corresponding resources appearing in AWS


Let's take a step back and make sure we understand all the resources and repositories used.


The first repository we created is what Flux uses to manage itself on the cluster, as well as other repositories. In order to tell Flux about a repository with Crossplane EC2 claims, we created a GitSource YAML file that points to the HTTPS address of the repository with the EC2 claims.

The EC2 claims repository contains a folder where plain Kubernetes manifest files are located. In order to tell Flux what files to observe, we created a Kustomization and linked it with the GitSource via its name. The Kustomization points to the folder containing the K8s manifests.
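
For reference, since we used --export, the two files committed to the flux-infra repo should look approximately like this (the fields mirror the flags we passed, and your GitHub user is substituted into the URL); note how the Kustomization's sourceRef links to the GitRepository by name:

# clusters/crossplane-cluster/demo-source.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: crossplane-demo
  namespace: flux-system
spec:
  interval: 30s
  ref:
    branch: master
  url: https://github.com/<GITHUB_USER>/crossplane-ec2.git

# clusters/crossplane-cluster/crossplane-demo.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: crossplane-demo
  namespace: flux-system
spec:
  interval: 1m
  path: ./ec2-claim
  prune: true
  sourceRef:
    kind: GitRepository
    name: crossplane-demo
  targetNamespace: default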

Cleanup

  • to clean up the EC2 Instance and underlying infrastructure, remove the claim-aws.yaml demo file from the crossplane-ec2 repository
    • rm ec2-claim/claim-aws.yaml
    • git add .
    • git commit -m "EC2 instance removed"
  • after a commit or timer lapse, Flux will synchronize, and Crossplane will pick up the removed artefact and delete the cloud resources

    the ec2-claim folder must be present in the repo after the claim yaml is removed, otherwise Flux cannot reconcile

Manual Cleanup

In case you cannot use the repository, it's possible to clean up the resources by deleting them from Flux.

  • deleting the Flux kustomization with flux delete kustomization crossplane-demo will remove all the resources from the cluster and AWS
  • to clean up the EC2 Instance and underlying infrastructure, remove the EC2 claim from the cluster: kubectl delete VirtualMachineInstance sample-ec2

Cluster Cleanup

  • wait until the watch kubectl get managed output no longer contains any AWS resources
  • delete the cluster with make cleanup
  • optionally remove the flux-infra repository

Summary

GitOps with Flux or Argo CD and Crossplane offers a very powerful and flexible model for Platform builders. In this demo, we focused on the application side, with the Kubernetes clusters themselves deployed some other way, be it with Crossplane, Fleet, Cluster API, etc.

What we achieved on top of using Crossplane’s Resource Model is that we no longer interact with kubectl directly to manage resources, but rather delegate this activity to Flux. Crossplane still runs on the cluster and reconciles all resources. In other words, we've moved the API surface from kubectl to Git.

Kubernetes Cookbook

· 14 min read


Overview

This documentation assumes basic knowledge of Kubernetes and kubectl. To learn or refresh container orchestration concepts, please refer to the official documentation.

How to Create Kubernetes Homelab

· 7 min read

Photo by Clay Banks on Unsplash

How to create Kubernetes home lab on an old laptop

Introduction

I’ve recently “discovered” an old laptop forgotten somewhere in the depths of my basement and decided to create a mini home lab with it where I could play around with Kubernetes.

Cloud Native Developer Workflow

· 7 min read

Photo by Hack Capital on Unsplash

Cloud Native - Developer Workflow

Software Development Lifecycle with Kubernetes and Docker

Introduction

Software development tooling and processes have evolved rapidly in the last decade to meet the growing needs of developers. On top of mastering a few programming languages and paradigms, software developers must learn to navigate an increasingly complex landscape of tools and processes.

Expose Kubernetes Service Using Ngrok

· 4 min read

Photo by Product School on Unsplash

Expose local Kubernetes service on internet using ngrok

Working with a local Kubernetes cluster such as minikube, k3s, microk8s, or others is great for testing new features, experimenting, and running POCs. Once you are ready with a cool new functionality or just want to quickly share the results of your work with colleagues or customers, well, you have to push everything to an online cluster. It might not be an issue if you have a good CI/CD pipeline set up, but most of the time it’s simply too much effort for a simple one-off demo.