K8sGPT is an open-source diagnostic tool for Kubernetes clusters. It scans the environment, detects issues such as misconfigurations, failing services, or unhealthy resources, and explains them in plain English using large language models (LLMs), acting like a 24×7 Kubernetes consultant integrated into your workflow. When paired with Ollama, the models run entirely locally, so cluster data never leaves your machine, an ideal setup for security-sensitive environments.
Step 1: Install K8sGPT CLI
Download and install the CLI package.
curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.4.14/k8sgpt_amd64.deb
sudo dpkg -i k8sgpt_amd64.deb
Step 2: Install Ollama Locally
Install Ollama, a lightweight runtime for running open-source language models locally. Ensure that Ollama is active and listening on its default port, 11434, which K8sGPT needs in order to connect and perform local inference.
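Under the hood, K8sGPT talks to Ollama over its REST API on port 11434. The sketch below builds the kind of request Ollama's /api/generate endpoint expects, which is also a quick way to confirm the daemon is reachable. The model name "llama3" is an assumption; substitute whichever model you have pulled with `ollama pull`.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port


def build_generate_request(prompt, model="llama3"):
    """Build the JSON request for Ollama's /api/generate endpoint.
    The model name is an assumption; use any model you have pulled."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )


req = build_generate_request("Why is my pod in CrashLoopBackOff?")
# urllib.request.urlopen(req) will return the model's answer once
# `ollama serve` is running and the model has been pulled.
```

Sending the request only succeeds with the Ollama daemon running; building it, as above, needs no network at all.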
curl -fsSL https://ollama.com/install.sh | sh
Step 3: Configure K8sGPT to Use Ollama
Let’s connect K8sGPT with the Ollama backend.
k8sgpt auth add --backend ollama --baseurl http://localhost:11434/
k8sgpt auth default --provider ollama
Optional: Use OpenAI (or Another Provider)
If you’d rather use OpenAI or another supported provider instead of Ollama, add the backend with your API token, replacing <your-openai-api-key> with your actual key:
k8sgpt auth add --backend openai --token <your-openai-api-key>
k8sgpt auth default --provider openai
You can verify your setup using:
k8sgpt auth list
Step 4: Deploy Sample Faulty Resources
To give K8sGPT something to find, save the following manifests as faulty-k8sgpt-resources.yaml. Each resource is deliberately broken: an image that does not exist, a Service selector that matches no pods, an unused empty ConfigMap, a container that exits immediately, and a Service with no selector.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: broken-deployment
  labels:
    app: broken-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: broken-app
  template:
    metadata:
      labels:
        app: broken-app
    spec:
      containers:
        - name: broken-container
          image: invalid-image-name:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: broken-service
spec:
  selector:
    app: mismatch-label
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: broken-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crashloop-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: crash-app
  template:
    metadata:
      labels:
        app: crash-app
    spec:
      containers:
        - name: crash-container
          image: busybox
          command: ["/bin/sh", "-c", "exit 1"]
---
apiVersion: v1
kind: Service
metadata:
  name: incomplete-service
spec:
  ports:
    - port: 8080
      targetPort: 8080

Apply the manifests:
kubectl apply -f faulty-k8sgpt-resources.yaml
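Before running K8sGPT, you can confirm the breakage yourself from `kubectl get pods -o json`. The sketch below extracts containers stuck in a waiting state such as ImagePullBackOff or CrashLoopBackOff, using the standard `status.containerStatuses[].state.waiting.reason` fields of the Kubernetes Pod API; the sample data mimics the shape of kubectl's JSON output.

```python
def find_waiting_containers(pod_list):
    """Return (pod, container, reason) triples for containers stuck in a
    waiting state. `pod_list` is the dict parsed from
    `kubectl get pods -o json`."""
    problems = []
    for pod in pod_list.get("items", []):
        pod_name = pod["metadata"]["name"]
        for cs in pod.get("status", {}).get("containerStatuses", []):
            waiting = cs.get("state", {}).get("waiting")
            if waiting:
                problems.append((pod_name, cs["name"], waiting.get("reason")))
    return problems


# Sample mirroring the relevant slice of `kubectl get pods -o json`;
# the pod-name suffix is illustrative.
sample = {
    "items": [
        {
            "metadata": {"name": "broken-deployment-abc12"},
            "status": {
                "containerStatuses": [
                    {
                        "name": "broken-container",
                        "state": {"waiting": {"reason": "ImagePullBackOff"}},
                    }
                ]
            },
        }
    ]
}

print(find_waiting_containers(sample))
# [('broken-deployment-abc12', 'broken-container', 'ImagePullBackOff')]
```

In a live cluster you would feed it `json.loads(subprocess.check_output(["kubectl", "get", "pods", "-o", "json"]))` instead of the sample dict.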
Running Your First Analysis
Run this command to analyze your cluster:
k8sgpt analyze --explain
This command will:
- Scan your cluster
- Detect issues (e.g., misconfigured services, unhealthy pods, failed mounts)
- Explain them in simple language with suggested fixes
You’ll receive clear, human-readable insights like:
- Deployment 'broken-deployment' has containers in ImagePullBackOff
Reason: Image 'invalid-image-name:latest' not found.
- Service 'broken-service' has no ready endpoints.
Reason: No matching pods with label 'app=mismatch-label'.
- Deployment 'crashloop-deployment' is in CrashLoopBackOff.
Reason: Container exited with status 1.
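Beyond the human-readable report, k8sgpt can emit machine-readable results (k8sgpt analyze --explain --output json), which is handy for dashboards or alerting. The sketch below tallies detected problems per resource kind; note that the "results"/"kind" field names are assumptions based on one version's output and may differ between k8sgpt releases, so treat this as a template rather than a fixed schema.

```python
def summarize_by_kind(report):
    """Count detected problems per resource kind.
    `report` is the dict parsed from `k8sgpt analyze --output json`;
    the 'results' and 'kind' field names are assumed and may vary
    across k8sgpt versions."""
    counts = {}
    for result in report.get("results", []):
        kind = result.get("kind", "Unknown")
        counts[kind] = counts.get(kind, 0) + 1
    return counts


# Hypothetical sample shaped like the analysis of the faulty resources above.
sample_report = {
    "results": [
        {"kind": "Deployment", "name": "default/broken-deployment"},
        {"kind": "Service", "name": "default/broken-service"},
        {"kind": "Deployment", "name": "default/crashloop-deployment"},
    ]
}

print(summarize_by_kind(sample_report))  # {'Deployment': 2, 'Service': 1}
```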
Centralizing Kubernetes troubleshooting with AI-driven tools like K8sGPT simplifies issue detection and keeps operations running smoothly. Backed by a local AI model, K8sGPT provides:
- Proactive scanning: it inspects the cluster and flags resources that are misconfigured or likely to fail soon.
- Plain-language explanations: issues are described simply, so you or your team can quickly understand them and act accordingly.
- Data privacy: with Ollama, inference runs entirely on your own machine, so nothing is shared externally.
Adopting this centralized, AI-driven approach to Kubernetes analysis brings long-term operational efficiency, security, and resilience.