This article explains how to display timestamps with kubectl logs and how to filter log output by time for more effective Kubernetes troubleshooting.
In this lesson, we explore how to enhance your Kubernetes troubleshooting by displaying timestamps with the kubectl logs command and filtering log output by time. By incorporating timestamps, you gain valuable insight into when each log entry was generated, making it easier to diagnose issues effectively.
To include timestamps in your pod logs, simply add the --timestamps flag. This flag prefixes each log line with the time it was generated, providing vital context during troubleshooting.
kubectl logs <pod-name> --timestamps
For instance, when examining logs from an NGINX Ingress controller pod, you will see timestamps displayed alongside the application logs.
Using timestamps along with log output is particularly useful for correlating log entries with system events or errors.
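As a quick illustration, the sketch below shows the general shape of timestamped output; the pod name, namespace, and log lines are placeholders rather than output from a real cluster. kubectl prefixes each line with an RFC3339 timestamp:

kubectl logs <ingress-controller-pod> -n ingress-nginx --timestamps

2025-04-19T15:56:11.117300000Z <first log line emitted by the application>
2025-04-19T15:56:12.204115000Z <next log line emitted by the application>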
In some scenarios, your application's logging configuration might not emit timestamps at all. Even when log entries lack timestamps in the output, the container runtime still records the time each line was written, and that metadata remains available for backend systems and further analysis. The application logs from a Notes app illustrate such a case:
controlplane ~ ➜  k logs -n uat notes-app-deployment-d4fcc5ccd-n7nm8

> [email protected] start /app
> node app.js

App is running on port 3000
To integrate timestamps, re-run the command with the --timestamps flag.
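For the Notes app above, that means re-running the same command with the flag appended:

kubectl logs -n uat notes-app-deployment-d4fcc5ccd-n7nm8 --timestamps

The output is the same application log, with each line now prefixed by the time the container runtime recorded it.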
In addition to displaying timestamps, you can filter log output to a specific timeframe using the --since flag. This capability helps narrow down log entries to a relevant period, saving time during troubleshooting.

For example, if your pod generates logs every second and you need to inspect the logs from the past 5 seconds, you can run:
kubectl logs <pod-name> --since=5s
Below is an example output for a 5-second window:
# (Example log output for a 5-second time window)
I0419 15:56:11.117300 57 logs_generator.go:67] 521 POST /api/v1/namespaces/kube-system/pods/cs3 395
I0419 15:56:11.117287 57 logs_generator.go:67] 522 POST /api/v1/namespaces/kube-system/pods/rs 498
I0419 15:56:11.117273 57 logs_generator.go:67] 523 GET /api/v1/namespaces/default/pods/ijb 435
I0419 15:56:11.117292 57 logs_generator.go:67] 524 POST /api/v1/namespaces/default/pods/678 453
I0419 15:56:14.117276 57 logs_generator.go:67] 525 GET /api/v1/namespaces/kube-system/pods/xppk 375
I0419 15:56:16.117263 57 logs_generator.go:67] 526 POST /api/v1/namespaces/default/pods/xrtc 325
I0419 15:56:16.117268 57 logs_generator.go:67] 527 GET /api/v1/namespaces/default/pods/p2j 435
I0419 15:56:17.117292 57 logs_generator.go:67] 528 POST /api/v1/namespaces/kube-system/pods/4dic 337
I0419 15:56:17.117298 57 logs_generator.go:67] 529 GET /api/v1/namespaces/default/pods/qy9l 268
I0419 15:56:18.117281 57 logs_generator.go:67] 530 GET /api/v1/namespaces/ns/pods/sxn 302
I0419 15:56:18.117291 57 logs_generator.go:67] 531 PUT /api/v1/namespaces/ns/pods/ken 406
I0419 15:56:18.117290 57 logs_generator.go:67] 532 GET /api/v1/namespaces/ns/pods/kan3 415
I0419 15:56:19.117287 57 logs_generator.go:67] 533 GET /api/v1/namespaces/kube-system/pods/ks1 536
I0419 15:56:19.117275 57 logs_generator.go:67] 534 POST /api/v1/namespaces/default/pods/ijf 411
I0419 15:56:20.117280 57 logs_generator.go:67] 536 GET /api/v1/namespaces/default/pods/vbs 358
I0419 15:56:20.117276 57 logs_generator.go:67] 537 POST /api/v1/namespaces/kube-system/pods/05h3 415
I0419 15:56:20.117268 57 logs_generator.go:67] 538 GET /api/v1/namespaces/ns/pods/k61 231
I0419 15:56:21.117259 57 logs_generator.go:67] 539 GET /api/v1/namespaces/ns/pods/bcr1 536
I0419 15:56:21.117274 57 logs_generator.go:67] 540 PUT /api/v1/namespaces/default/pods/qk2 406
I0419 15:56:24.117279 57 logs_generator.go:67] 541 GET /api/v1/namespaces/kube-system/pods/h7k 583
I0419 15:56:34.117288 57 logs_generator.go:67] 542 POST /api/v1/namespaces/ns/pods/nk9 399
I0419 15:56:34.117285 57 logs_generator.go:67] 543 GET /api/v1/namespaces/kube-system/pods/q3nk 343
I0419 15:56:35.117296 57 logs_generator.go:67] 544 POST /api/v1/namespaces/default/pods/rrn9 272
I0419 15:56:35.117276 57 logs_generator.go:67] 545 PUT /api/v1/namespaces/ns/pods/v2n 364
I0419 15:56:35.117275 57 logs_generator.go:67] 546 GET /api
You can easily extend the time window as necessary. To capture logs from the past 10 seconds, or even up to 1 hour, adjust the value provided to the --since flag accordingly.
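For instance, the following commands widen the window to 10 seconds and to 1 hour respectively; <pod-name> is a placeholder for your log-generating pod, and the flag accepts standard duration suffixes such as s, m, and h:

kubectl logs <pod-name> --since=10s
kubectl logs <pod-name> --since=1h

Below is an example capturing logs over a longer period: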
# (Example log output for an extended time window)
I0419 15:56:17.117300 57 logs_generator.go:67] 531 PUT /api/v1/namespaces/default/pods/clbg 268
I0419 15:56:18.117278 57 logs_generator.go:67] 532 GET /api/v1/namespaces/ns/pods/o3l 436
I0419 15:56:18.117279 57 logs_generator.go:67] 533 POST /api/v1/namespaces/ns/pods/sxwn 302
I0419 15:56:18.117286 57 logs_generator.go:67] 534 PUT /api/v1/namespaces/default/pods/6nk 406
I0419 15:56:19.117282 57 logs_generator.go:67] 535 POST /api/v1/namespaces/default/pods/qq8 241
I0419 15:56:20.117276 57 logs_generator.go:67] 536 PUT /api/v1/namespaces/default/pods/oh5a 415
I0419 15:56:21.117265 57 logs_generator.go:67] 537 GET /api/v1/namespaces/ns/pods/k6l 231
I0419 15:56:21.117270 57 logs_generator.go:67] 538 POST /api/v1/namespaces/kube-system/pods/bcr1 536
I0419 15:56:23.117292 57 logs_generator.go:67] 539 GET /api/v1/namespaces/default/pods/ijf 411
I0419 15:56:24.117279 57 logs_generator.go:67] 540 GET /api/v1/namespaces/pods/hdv 583
I0419 15:56:25.117260 57 logs_generator.go:67] 541 GET /api/v1/namespaces/kube-system/pods/nk9 399
I0419 15:56:25.117264 57 logs_generator.go:67] 542 GET /api/v1/namespaces/default/pods/h38 229
I0419 15:56:25.117268 57 logs_generator.go:67] 543 POST /api/v1/namespaces/kube-system/pods/qhk 343
I0419 15:56:26.117262 57 logs_generator.go:67] 544 PUT /api/v1/namespaces/ns/pods/gqh 255
I0419 15:56:26.117277 57 logs_generator.go:67] 545 GET /api/v1/namespaces/ns/pods/q3nk 272
I0419 15:56:26.117285 57 logs_generator.go:67] 546 PUT /api/v1/namespaces/default/pods/908j 338
I0419 15:56:26.117291 57 logs_generator.go:67] 547 GET /api/v1/namespaces/ns/pods/q3nk 343
I0419 15:56:27.117300 57 logs_generator.go:67] 548 POST /api/v1/namespaces/kube-system/pods/lobp 375
I0419 15:56:27.117285 57 logs_generator.go:67] 549 PUT /api/v1/namespaces/default/pods/hmkd 438
I0419 15:56:27.117292 57 logs_generator.go:67] 550 GET /api/v1/namespaces/kube-system/pods/n8bz 513
I0419 15:56:32.117271 57 logs_generator.go:67] 551 POST /api/v1/namespaces/kube-system/pods/c4f 345
I0419 15:56:32.117281 57 logs_generator.go:67] 552 PUT /api/v1/namespaces/default/pods/52b5 513
I0419 15:56:35.117280 57 logs_generator.go:67] 553 GET /api/v1/namespaces/kube-system/pods/rFw4 529
I0419 15:56:35.117284 57 logs_generator.go:67] 554 GET /api/v1/namespaces/ns/pods/rg94 488
I0419 15:56:36.117276 57 logs_generator.go:67] 555 GET /api/v1/n
Adjusting the --since flag allows you to tailor the log output to the exact timeframe you need for effective troubleshooting.
By adding the --timestamps flag to your kubectl logs command, you enrich your log data with temporal context, which is essential for diagnosing issues in real time. Additionally, leveraging the --since flag lets you filter logs to a specific duration, simplifying the process of pinpointing errors or warnings.

For more detailed information on Kubernetes log management and best practices, refer to the Kubernetes Documentation.
Both flags are crucial in environments with high logging volumes where pinpointing the exact moment of an issue can lead to faster resolutions.
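Finally, the two flags work together; a minimal sketch, with <pod-name> as a placeholder:

# Show only the last hour of logs, with each line prefixed by its timestamp
kubectl logs <pod-name> --timestamps --since=1h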