How to locate, inspect, and validate Kubernetes control plane TLS certificates, check SANs and expiry, and troubleshoot certificate-related failures
Hello and welcome to this lesson. You'll learn how to locate, inspect, and validate TLS certificates used by Kubernetes control plane components so you can perform a cluster-wide certificate health check.

Scenario: you're a new administrator asked to audit certificates in an existing cluster. The first diagnostic step is to identify how the cluster was provisioned, because certificate storage and component invocation differ by provisioning method:
If the control plane runs as native systemd services (custom from-scratch installs), certificate file paths and TLS flags are visible from unit files or the running processes.
If the cluster was provisioned with kubeadm, control plane components run as static pods; manifests live under /etc/kubernetes/manifests and certificates are typically under /etc/kubernetes/pki.
When doing a certificate health check, create a simple spreadsheet (paths, CN, SANs, organization, issuer, expiry) to track your findings across all nodes.
Why inspect certificates?
Confirm each component presents the correct Subject Common Name (CN) for identity.
Ensure required Subject Alternative Names (SANs) — DNS and IP — are present.
Verify the issuer matches your expected CA.
Detect expired or soon-to-expire certificates to plan rotations and prevent outages.
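The checks above can be scripted. Here is a minimal sketch, assuming certificates live as *.crt files under a kubeadm-style PKI directory; the helper name cert_csv is ours, not a standard tool. It prints one CSV row per certificate, ready to paste into the tracking spreadsheet:

```shell
#!/bin/sh
# Sketch: print "path,subject,issuer,notAfter" for every *.crt file
# under a PKI directory. The default path is a kubeadm-style assumption.
cert_csv() {
  pki_dir="${1:-/etc/kubernetes/pki}"
  echo "path,subject,issuer,notAfter"
  find "$pki_dir" -name '*.crt' | sort | while read -r crt; do
    subject=$(openssl x509 -in "$crt" -noout -subject | sed 's/^subject=//')
    issuer=$(openssl x509 -in "$crt" -noout -issuer | sed 's/^issuer=//')
    expiry=$(openssl x509 -in "$crt" -noout -enddate | sed 's/^notAfter=//')
    printf '%s,"%s","%s","%s"\n' "$crt" "$subject" "$issuer" "$expiry"
  done
}
```

Note that SANs are not captured here: they live in an X.509 extension, so they still require an `openssl x509 -text` pass per certificate.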
How to identify certificate files used by control plane components
Systemd-managed control plane
If the control plane is managed via systemd, inspect unit files (typically in /etc/systemd/system or /lib/systemd/system) to find the flags that point at certificate and key files.
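For illustration, a from-scratch install's kube-apiserver unit might include an excerpt like the following. The binary location and all certificate paths here are assumptions; check your own unit files, or the flags on the running process with `ps aux | grep kube-apiserver`:

```ini
# /etc/systemd/system/kube-apiserver.service (illustrative excerpt;
# paths are assumptions and will differ per install)
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \
  --tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \
  --client-ca-file=/var/lib/kubernetes/ca.crt \
  --etcd-cafile=/var/lib/kubernetes/ca.crt \
  --etcd-certfile=/var/lib/kubernetes/apiserver-etcd-client.crt \
  --etcd-keyfile=/var/lib/kubernetes/apiserver-etcd-client.key
```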
kubeadm-managed control plane
Inspect the static pod manifest for the kube-apiserver under /etc/kubernetes/manifests to find the exact certificate paths and filenames used by the control plane container.
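On a kubeadm cluster, the manifest's command section typically contains flags like these (an illustrative excerpt reflecting common kubeadm defaults; exact values can differ per cluster):

```yaml
# Excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml
# (typical kubeadm defaults; verify against your own manifest)
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```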
Once you have the certificate file list, inspect each certificate to extract its metadata (subject CN, SANs, issuer, validity, and so on).

How to decode and inspect certificates
Use openssl to read a PEM certificate and inspect its fields:
$ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 3147495682089747350 (0x2bae26a58f090396)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Feb 11 05:39:19 2019 GMT
            Not After : Feb 11 05:39:20 2020 GMT
        Subject: CN=kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:d9:69:38:80:68:3b:b7:2e:9e:25:00:e8:fd:01:
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Subject Alternative Name:
                DNS:master, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:172.17.0.27
What to verify for each certificate
Subject CN matches the component (e.g., CN=kube-apiserver).
Expected DNS and IP SANs are present.
Issuer is the expected CA (kubeadm typically uses CA with CN=kubernetes).
Certificate validity period is current (check the Not After date for expiry).
Use the table in the first figure below as a pattern for your spreadsheet: include certificate path, CN, SANs, organization, issuer, and expiration.
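For the expiry item specifically, openssl can both print the Not After date and act as a pass/fail check. Here is a sketch; the helper name check_cert_expiry and the 30-day window are our choices, not standard tooling:

```shell
#!/bin/sh
# Sketch: report a certificate's expiry date and warn if it expires
# within 30 days (2592000 seconds). Helper name is illustrative.
check_cert_expiry() {
  cert="$1"
  # Print only the Not After date.
  openssl x509 -in "$cert" -noout -enddate
  # -checkend exits non-zero if the cert expires within the window,
  # which makes this usable from cron or CI.
  if openssl x509 -in "$cert" -noout -checkend 2592000 >/dev/null; then
    echo "OK: valid for at least 30 more days"
  else
    echo "WARNING: expires within 30 days"
  fi
}
```

On a kubeadm node you would call it as, for example, check_cert_expiry /etc/kubernetes/pki/apiserver.crt; newer kubeadm releases also ship a built-in overview via kubeadm certs check-expiration.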
How to troubleshoot certificate-related failures
If components run under systemd, start with the system journal to find TLS handshake and certificate errors:
$ journalctl -u etcd.service -l
Example etcd log snippets showing TLS configuration and handshake failure:
2019-02-13 02:53:28.144631 I | etcdmain: etcd Version: 3.2.18
2019-02-13 02:53:28.185588 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/old-ca.crt, client-cert-auth = true
2019-02-13 02:53:30.080130 I | etcdserver: published {Name:master ClientURLs:[https://127.0.0.1:2379]} to cluster
WARNING: 2019/02/13 02:53:30 Failed to dial 127.0.0.1:2379: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate"; please retry.
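A "bad certificate" handshake failure like this usually means the certificate presented by one side does not chain to the CA the other side trusts (note the stale trusted-ca path in the log above). You can test the chain offline before touching the cluster; a sketch, with the helper name verify_against_ca being ours:

```shell
#!/bin/sh
# Sketch: check whether a certificate chains to a given CA bundle.
# openssl prints "<path>: OK" on success and exits non-zero on failure.
verify_against_ca() {
  ca="$1"; cert="$2"
  openssl verify -CAfile "$ca" "$cert"
}
```

With kubeadm's default etcd layout that would be verify_against_ca /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/server.crt; a failure here confirms the certificate/CA mismatch.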
If control plane components are running as static pods (kubeadm), view pod logs with kubectl. Replace <etcd-pod> with the actual pod name; control plane pods run in the kube-system namespace:
$ kubectl logs <etcd-pod> -n kube-system
Example pod log output (you may see TLS/handshake errors similar to the systemd example above).
If the control plane is down and kubectl cannot reach the API server, check the container runtime logs directly:
For CRI-based runtimes (containerd, CRI-O), use crictl:
$ crictl ps -a
$ crictl logs <container-id>
For Docker runtime:
$ docker ps -a
$ docker logs <container-id>
What to do with findings
Missing SANs: reissue the certificate including the correct DNS/IP SANs.