EFK Stack: Enterprise-Grade Logging and Monitoring
Instrumenting a Simple Python App for Logging
Configuring Fluent Bit to Collect Python Application Logs
Hello and welcome back to this lesson.
In our previous lesson, we deployed the login application and observed how user interactions on the frontend were accurately recorded in our logs. We also discussed the importance of configuring logging within the application code to capture all relevant details.
Now that our application is generating logs, the next step is to use Fluent Bit to transport these logs to Elasticsearch. Fluent Bit offers an efficient method for forwarding logs, and in earlier demonstrations, we configured it for our login app. In this lesson, we will cover how a minimal change in the Fluent Bit configuration allows us to reuse the same setup to collect Python application logs.
Verifying the Kubernetes Environment
Before making any changes, ensure that your application is running correctly within your Kubernetes cluster. Verify your pods with:
kubectl get pods
You should see an output similar to:
NAME READY STATUS RESTARTS AGE
elasticsearch-0 1/1 Running 0 10m
kibana-5bfc7f6b4-9tx9l 1/1 Running 0 10m
simple-webapp-deployment-655956679c5-fqqj 1/1 Running 0 8m37s
Next, verify the services by running:
kubectl get svc
This command should produce a result like:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch NodePort 10.108.117.102 <none> 9200:30200/TCP,9300:33000/TCP 10m
kibana NodePort 10.06.233.166 <none> 5601:30601/TCP 10m
simple-webapp-service NodePort 10.101.197.233 <none> 80:30001/TCP 8m14s
Tip
Ensure that your Kubernetes cluster is healthy and all related pods are running before applying configuration changes.
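If you prefer a scripted check over reading the output by eye, the two commands below (a minimal sketch; adjust the namespace and timeout to your environment) confirm that the nodes are Ready and wait for every pod in the current namespace to become Ready:
kubectl get nodes
kubectl wait --for=condition=Ready pod --all --timeout=120s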
Reviewing Fluent Bit Deployment Files
Within the Kubernetes deployments directory, you will find several Fluent Bit-related files that were deployed for the login application. These files include:
- fluent-bit.yaml
- Service account configuration
- ConfigMap
- ClusterRole
- ClusterRoleBinding
For example, you might see the following files when listing the directory contents:
ls -lrt
total 28
-rw-r--r-- 1 root root 271 Jul 7 11:13 python-app-service.yaml
-rw-r--r-- 1 root root 250 Jul 7 11:13 python-app-deployment.yaml
-rw-r--r-- 1 root root 419 Jul 7 11:13 fluent-bit.yaml
-rw-r--r-- 1 root root 182 Jul 7 11:13 fluent-bit-config.yaml
-rw-r--r-- 1 root root 181 Jul 7 11:13 fluent-bit-clusterrole.yaml
-rw-r--r-- 1 root root 260 Jul 7 11:13 fluent-bit-clusterrolebinding.yaml
In this lesson, we will modify a single line in the Fluent Bit configuration file (fluent-bit-config.yaml) to monitor the logs of our Python application.
Updating the Fluent Bit Configuration
To adapt Fluent Bit to collect Python application logs, update the [INPUT] section in fluent-bit-config.yaml. Change the Path to point to the container log files generated by the Python application's pods, as shown below:
Path /var/log/containers/*simple-webapp-deployment*.log
This update ensures that Fluent Bit tails only the container log files belonging to our Python application's pods. Below is the updated configuration file; notice that only the [INPUT] section has been changed:
[PARSER]
    Name          docker_no_time
    Format        json
    Time_Keep     Off
    Time_Key      time
    Time_Format   %Y-%m-%dT%H:%M:%S.%L

[SERVICE]
    Daemon        Off
    Flush         1
    Log_Level     info
    Parsers_File  /fluent-bit/etc/parsers.conf
    Parsers_File  /fluent-bit/etc/conf/custom_parsers.conf
    HTTP_Server   On
    HTTP_Listen   0.0.0.0
    HTTP_Port     2020
    Health_Check  On

[INPUT]
    Name              tail
    Path              /var/log/containers/*simple-webapp-deployment*.log
    multiline.parser  docker, cri
    Tag               kube.*
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On

[FILTER]
    Name                kubernetes
    Match               kube.*
    Keep_Log            On
    K8S-Logging.Parser  On
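If you have shell access to the node running your workloads, you can confirm that this glob actually matches the expected files before deploying the change (the exact file names will differ in your environment):
ls /var/log/containers/*simple-webapp-deployment*.log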
If you need to collect logs from multiple sources within the same namespace, you can simply add another input section without redeploying a separate Fluent Bit instance. For example:
[PARSER]
    Name          docker_no_time
    Format        json
    Time_Keep     Off
    Time_Key      time
    Time_Format   %Y-%m-%dT%H:%M:%S.%L

[INPUT]
    Name              tail
    Path              /var/log/containers/*simple-webapp-deployment*.log
    multiline.parser  docker, cri
    Tag               kube.*
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On

[INPUT]
    Name            systemd
    Tag             host.*
    Systemd_Filter  _SYSTEMD_UNIT=kubelet.service

[FILTER]
    Name                kubernetes
    Match               kube.*
    Keep_Log            On
    K8S-Logging.Parser  On
Fluent Bit will automatically detect and process multiple inputs defined in the same configuration file.
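Note that the snippets above show only the input and filter stages. For the logs to actually reach Elasticsearch, the pipeline also needs an [OUTPUT] section. A minimal sketch using the es output plugin might look like this, assuming the elasticsearch service name and port 9200 shown in the kubectl get svc output earlier:
[OUTPUT]
    Name                es
    Match               kube.*
    Host                elasticsearch
    Port                9200
    Logstash_Format     On
    Suppress_Type_Name  On
    Replace_Dots        On
With Logstash_Format On, records are written to daily indices named logstash-YYYY.MM.DD, which is the index pattern you will later point Kibana at.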
Configuration Reminder
When updating your Fluent Bit configuration, always verify that your changes are correctly applied to prevent disruptions in log collection.
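For example, you can validate the manifest before applying it and, once it has been deployed in the next step, read the ConfigMap back from the cluster. The ConfigMap name below is assumed from the file name, so adjust it if yours differs:
kubectl apply --dry-run=client -f fluent-bit-config.yaml
kubectl get configmap fluent-bit-config -o yaml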
Deploying the Updated Configuration
After updating the configuration, clear your terminal and run the following command to deploy the changes:
kubectl apply -f .
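Keep in mind that running pods do not automatically reload a changed ConfigMap. If Fluent Bit was already running before you edited the configuration, restart it so the new [INPUT] path takes effect. Assuming it is deployed as a DaemonSet named fluent-bit, one way to do this is:
kubectl rollout restart daemonset/fluent-bit
Alternatively, simply delete the Fluent Bit pod and let its controller recreate it.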
Verify that all pods are running by using:
kubectl get pods
You should see an output similar to:
NAME READY STATUS RESTARTS AGE
elasticsearch-6b 1/1 Running 0 12m
fluent-bit-8db 1/1 Running 0 13s
kibana-5bfcf6644-9t9xl 1/1 Running 0 12m
simple-webapp-deployment-655969679c5-fqqj 1/1 Running 0 11m
Next, check the logs of the Fluent Bit pod to confirm that it is picking up your application logs:
kubectl logs -f fluent-bit-qmrbw
A portion of the log output might look like this:
[2024/07/07 11:26:08] [ info] [fluent bit] version=0.3.0, commit=3529bb132, pid=1
[2024/07/07 11:26:08] [ info] [storage] ver=1.5.2, type=normal, sync=normal, checksum=off, max_chunks_up=128
[2024/07/07 11:26:08] [ info] [metrics] version=0.9.0
[2024/07/07 11:26:08] [ info] [cmetrics] version=0.2.1
[2024/07/07 11:26:08] [ info] [input:tail] 0 initializing
[2024/07/07 11:26:08] [ info] [input:tail] 1 initializing
[2024/07/07 11:26:08] [ info] [input:tail] line 0 started
[2024/07/07 11:26:08] [ info] [input:tail] line 1 started
[2024/07/07 11:26:08] [ info] [filter:kubernetes:kubernetes.0] https://host=kubernetes.default.svc port=443
[2024/07/07 11:26:08] [ info] [filter:kubernetes:kubernetes.0] local POD info
[2024/07/07 11:26:08] [ info] [filter:kubernetes:kubernetes.0] entry rules match
[2024/07/07 11:26:08] [ info] [filter:kubernetes:kubernetes.0] could not get meta for POD fluent-bit-qmrbw
[2024/07/07 11:26:08] [ info] [output:worker] output worker is starting...
[2024/07/07 11:26:08] [ info] [output:worker] worker #0 started
[2024/07/07 11:26:08] [ info] [output:worker] worker #1 started
The Fluent Bit pod is now successfully collecting logs from your Python (or login) application. The next step is pushing these logs to Elasticsearch, where an index is created; with Logstash_Format enabled, it follows the logstash-YYYY.MM.DD naming pattern. In a future lesson, we will build an effective dashboard in Kibana using these logs.
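If you want to confirm that the index exists before moving on to Kibana, you can query Elasticsearch directly through its NodePort (30200 in the service listing above); <node-ip> is a placeholder for the address of one of your cluster nodes:
curl "http://<node-ip>:30200/_cat/indices?v"
A logstash-* entry in the response indicates that Fluent Bit is shipping logs successfully.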
Next Steps
After confirming that the logs are being collected correctly, explore further enhancements such as filtering and parsing to improve log analytics.
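As a small example of such filtering, the grep filter below (a sketch; the field name and regular expression are illustrative) would drop health-check request lines before they are shipped to Elasticsearch:
[FILTER]
    Name     grep
    Match    kube.*
    Exclude  log /health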
That’s it for this lesson. Thank you for following along, and see you in the next lesson!