Kubernetes Security

How humans and application clients authenticate to Kubernetes, how authorization is enforced with RBAC, and how ServiceAccounts power in-cluster identity. Includes practical YAML examples and real-life scenarios.

Topics: AuthN · AuthZ (RBAC) · ServiceAccounts · OIDC · Secrets

The Kubernetes Security Model

The API Server is the security gate
Every action goes through the Kubernetes API Server: kubectl, CI/CD, controllers, operators, and even “kubectl exec”. Security is enforced there in three stages: Authentication → Authorization → Admission Control.
1) AuthN
Proves identity (who are you?). Examples: OIDC token, client cert, managed identity, service account token.
2) AuthZ (RBAC)
Checks permissions (what can you do?). Roles/ClusterRoles + RoleBindings/ClusterRoleBindings.
3) Admission
Policy enforcement (should we allow this?). Examples: Pod Security, validating/mutating webhooks.
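The three stages can be pictured as a strict pipeline: a request that fails any stage never reaches the next one. The toy sketch below illustrates the ordering only; all names are illustrative, not real Kubernetes APIs.

```python
# Toy model of the API Server request gate: AuthN -> AuthZ -> Admission.
# Everything here is illustrative -- the real API server is far richer.

def authenticate(token):
    """AuthN: map a credential to an identity, or None if unknown."""
    known_tokens = {"alice-token": "alice@company.com"}
    return known_tokens.get(token)

def authorize(user, verb, resource):
    """AuthZ: is this verb on this resource allowed for the user?"""
    policy = {"alice@company.com": {("get", "pods"), ("list", "pods")}}
    return (verb, resource) in policy.get(user, set())

def admit(manifest):
    """Admission: policy check on the object itself, e.g. no privileged pods."""
    return not manifest.get("privileged", False)

def handle_request(token, verb, resource, manifest):
    user = authenticate(token)
    if user is None:
        return "401 Unauthorized"         # AuthN failed
    if not authorize(user, verb, resource):
        return "403 Forbidden"            # AuthZ failed
    if not admit(manifest):
        return "400 Denied by admission"  # Admission failed
    return "200 OK"
```

Note the failure modes map to what you see in practice: a bad credential yields 401, a valid identity without RBAC permissions yields 403, and an admission rejection reports the violated policy.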

Two identities you must separate
  • Human identities (admins, developers): authenticate via enterprise IdP (OIDC) or managed solutions; then RBAC governs access.
  • Workload identities (apps in Pods): authenticate using ServiceAccounts (often via projected tokens) and then RBAC applies.

Real-life scenarios

Scenario A: Human uses kubectl
A developer logs into Azure/Google/AWS SSO, obtains a token, runs kubectl get pods. RBAC decides whether they can read pods in that namespace.
Scenario B: App calls Kubernetes API
A “job-runner” Pod lists ConfigMaps or creates Jobs. It uses its ServiceAccount token, and RBAC allows only required verbs/resources.

Kubernetes Security Topics

Authentication (AuthN)

Functional goal

Authentication answers: “Who are you?” The API Server validates your identity using one of several mechanisms.

Common AuthN mechanisms
  • OIDC tokens (enterprise SSO: Entra ID, Okta, etc.)
  • Client certificates (mTLS, often for system/admin use)
  • ServiceAccount tokens (workloads in Pods)
  • Cloud-managed identity integrations (platform-specific)
Real-life example

Your organization wants developers to authenticate using corporate SSO, not shared kubeconfig files. Users log in to the IdP → receive a token → the API server validates the token → RBAC enforces permissions.

Minimal OIDC kubeconfig pattern (illustrative)
apiVersion: v1
kind: Config
users:
- name: dev-user
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://idp.example.com
        client-id: k8s-cli
        # token is obtained via login flow and cached by client tooling
In managed clusters (AKS/EKS/GKE), the exact client auth flow is platform-specific, but the core idea is the same: the API server validates your identity token before RBAC is checked.
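Modern kubectl setups typically delegate the login flow to an exec credential plugin rather than the auth-provider block. The fragment below is a hedged sketch of that pattern; the plugin name and its arguments are placeholders for whatever your platform provides (e.g. a kubelogin-style helper).

```yaml
# Sketch of an exec credential plugin entry (modern alternative to auth-provider).
# "kubelogin" and its args are placeholders for your platform's plugin.
apiVersion: v1
kind: Config
users:
- name: dev-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubelogin
      args:
        - get-token
        - --oidc-issuer-url=https://idp.example.com
        - --oidc-client-id=k8s-cli
      interactiveMode: IfAvailable
```

kubectl invokes the plugin on demand; the plugin runs the browser/device login flow, caches the token, and hands it back to kubectl, which presents it to the API server.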

Authorization (RBAC)

Functional goal

Authorization answers: “What are you allowed to do?” RBAC evaluates requested actions (verb + resource + namespace scope) against policies.

Key concepts
  • Role: namespace-scoped permissions
  • ClusterRole: cluster-wide or reusable permissions
  • RoleBinding: grants a Role/ClusterRole to a subject in a namespace
  • ClusterRoleBinding: grants cluster-wide permissions
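At its core, RBAC evaluation matches the request's (apiGroup, resource, verb) tuple against the union of rules bound to the subject in that namespace. A simplified Python sketch of that matching (illustrative only; it ignores wildcards, subresources, and cluster scope):

```python
# Simplified RBAC check: does any rule bound to the subject cover the
# requested apiGroup/resource/verb? Illustrative only -- the real
# evaluator also handles wildcards, subresources, and nonResourceURLs.

def is_allowed(rules, api_group, resource, verb):
    return any(
        api_group in rule["apiGroups"]
        and resource in rule["resources"]
        and verb in rule["verbs"]
        for rule in rules
    )

# Rules shaped like a read-only pods Role (core API group is ""):
read_only_pods = [
    {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]}
]
```

To check effective permissions against a live cluster, `kubectl auth can-i list pods -n spring-demo --as alice@company.com` asks the API server the same question directly.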
YAML example: “Read-only pods” in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: spring-demo
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devs-can-read-pods
  namespace: spring-demo
subjects:
  - kind: User
    name: alice@company.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Real-life example
Developers can view Pods in dev namespace for troubleshooting but cannot delete them or modify Deployments.

ServiceAccounts

Functional goal

A ServiceAccount is the identity used by workloads running inside the cluster. Pods can call the Kubernetes API using a token associated with their ServiceAccount. RBAC then controls what the workload can do.

Real-life example

A “deployment automation” Pod needs permission to create Jobs in its namespace (but nothing else). You create a ServiceAccount + Role + RoleBinding to enforce least privilege.

YAML example: SA + least-privilege RBAC
apiVersion: v1
kind: ServiceAccount
metadata:
  name: job-runner
  namespace: spring-demo

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-runner-role
  namespace: spring-demo
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-runner-binding
  namespace: spring-demo
subjects:
  - kind: ServiceAccount
    name: job-runner
    namespace: spring-demo
roleRef:
  kind: Role
  name: job-runner-role
  apiGroup: rbac.authorization.k8s.io
This is the “gold standard” pattern: each app has its own ServiceAccount, bound to the smallest RBAC role it needs.

ServiceAccount Tokens in Pods

Functional goal

Pods authenticate to the Kubernetes API by presenting a ServiceAccount token. Modern clusters often use projected service account tokens with bounded lifetime. The token is mounted into the Pod filesystem (and the API server validates it).

Real-life example

A controller Pod calls https://kubernetes.default.svc and uses the mounted token to authenticate. RBAC then determines whether it can list secrets, create jobs, patch deployments, etc.

YAML example: Pod using a ServiceAccount + explicit token projection
apiVersion: v1
kind: Pod
metadata:
  name: api-client
  namespace: spring-demo
spec:
  serviceAccountName: job-runner
  containers:
    - name: client
      image: alpine:3.20
      command: ["sh","-c"]
      args:
        - |
          apk add --no-cache curl;
          TOKEN=$(cat /var/run/secrets/tokens/k8s-token);
          curl -sSk -H "Authorization: Bearer $TOKEN" \
            https://kubernetes.default.svc/api/v1/namespaces/spring-demo/pods;
          sleep 3600
      volumeMounts:
        - name: sa-token
          mountPath: /var/run/secrets/tokens
          readOnly: true
  volumes:
    - name: sa-token
      projected:
        sources:
          - serviceAccountToken:
              path: k8s-token
              expirationSeconds: 3600
              audience: https://kubernetes.default.svc
Not all clusters require explicit token projection in YAML because Kubernetes often mounts a token automatically. Explicit projection is useful when you want control over token lifetime and audience.
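From inside the Pod, client code reads the projected token from the mounted path and presents it as a bearer token. A minimal Python sketch, assuming the projection above (paths are the conventional in-cluster defaults; error handling omitted):

```python
import pathlib

# Token path matching the projected volume above; the API server address
# is the conventional in-cluster service DNS name.
TOKEN_PATH = "/var/run/secrets/tokens/k8s-token"
API_SERVER = "https://kubernetes.default.svc"

def bearer_headers(token_path=TOKEN_PATH):
    """Build the Authorization header from a mounted ServiceAccount token.

    Projected tokens are rotated by the kubelet, so re-read the file per
    request rather than caching it for the process lifetime.
    """
    token = pathlib.Path(token_path).read_text().strip()
    return {"Authorization": f"Bearer {token}"}
```

A real client would pass these headers to an HTTP library together with the cluster CA bundle from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt; official Kubernetes client libraries handle all of this automatically via their in-cluster config helpers.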

Human Access via SSO

Functional goal

Humans should authenticate using corporate identity (SSO), then receive RBAC permissions based on team role. This avoids shared credentials and supports auditing.

Real-life example

A platform engineer can manage cluster-wide resources; a developer can only view logs and restart pods inside the dev namespace.

YAML example: ClusterRole for namespace troubleshooting
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-troubleshooter
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "events"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-troubleshooting-access
  namespace: spring-demo
subjects:
  - kind: Group
    name: dev-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-troubleshooter
  apiGroup: rbac.authorization.k8s.io
Practical result
Developers can read logs/events in spring-demo but cannot scale deployments or modify services.

ConfigMaps and Secrets

Functional goal

Keep configuration out of container images. Store non-sensitive values in ConfigMaps and sensitive values in Secrets. Control access via RBAC and limit what workloads can read.

Real-life example

The UI uses a public API URL from ConfigMap and a database password from Secret. Only the app’s ServiceAccount is allowed to read those objects.

YAML example: inject ConfigMap + Secret as env vars
apiVersion: v1
kind: Pod
metadata:
  name: spring-ui-api
  namespace: spring-demo
spec:
  serviceAccountName: spring-ui-api-sa
  containers:
    - name: app
      image: myacr.azurecr.io/spring-ui-api:1.0
      env:
        - name: API_BASE_URL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: API_BASE_URL
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: DB_PASSWORD
For stronger secret protection, production platforms commonly integrate external secret stores (e.g., Azure Key Vault) and mount secrets via CSI drivers instead of keeping them in Kubernetes etcd.
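A related caveat: Secret values returned by the API are only base64-encoded, not encrypted, so anyone RBAC allows to get the Secret can decode it trivially:

```python
import base64

# Secret values shown by `kubectl get secret -o yaml` are base64-encoded.
# Base64 is an encoding, not encryption -- RBAC is the real access control.
def decode_secret_value(encoded):
    return base64.b64decode(encoded).decode("utf-8")

# e.g. a DB_PASSWORD of "s3cr3t" is stored as "czNjcjN0"
encoded = base64.b64encode(b"s3cr3t").decode("ascii")  # -> "czNjcjN0"
```

This is why restricting `get`/`list` on secrets in RBAC roles matters as much as where the secret material ultimately lives.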

Admission Control

Functional goal

Admission controls enforce organizational standards before objects are stored in etcd. Examples: enforce non-root containers, block privileged pods, require resource limits, enforce image registry allow-lists.

Real-life example

Your security team prevents developers from deploying privileged containers to production and requires all pods to specify CPU/memory requests and limits.

YAML example: Pod Security Standards via namespace labels
apiVersion: v1
kind: Namespace
metadata:
  name: spring-demo
  labels:
    pod-security.kubernetes.io/enforce: "baseline"
    pod-security.kubernetes.io/audit: "baseline"
    pod-security.kubernetes.io/warn: "baseline"
Practical result
Risky pod configurations are rejected or surfaced as warnings, before they ever run in the cluster.
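With enforcement labels like those on the namespace above, a manifest such as the following would be rejected at admission, because the baseline Pod Security Standard disallows privileged containers (illustrative manifest):

```yaml
# Example of a Pod the "baseline" Pod Security Standard rejects.
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
  namespace: spring-demo
spec:
  containers:
    - name: app
      image: alpine:3.20
      securityContext:
        privileged: true   # violates baseline -> API server rejects the Pod
```

The rejection happens before the object is stored in etcd, so the risky Pod never schedules at all.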
The security “golden flow”
Humans authenticate via enterprise SSO → RBAC grants least privilege in namespaces.
Apps run with unique ServiceAccounts → RBAC binds minimal permissions → admission policies enforce guardrails.