
# Metrics in this lab

## Overview of each metric

  1. running_pods is a Gauge metric that reports the total number of running pods in Kubernetes. It is started here in a separate thread, uses a Python library to query the Kubernetes API, and filters by status to return a count of running pods in your Kubernetes cluster.
  2. app_hello_world is a Counter metric that increments each time the http://localhost:5000 endpoint is hit. Its value is incremented here.

You can experiment with other metrics using the same Python library, if you like; a rough sketch of how the two metrics above could be wired up follows.
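For illustration, here is a minimal, hypothetical sketch (not the lab's actual source, which is linked above) of how the two metrics could be defined with the prometheus_client and kubernetes Python libraries inside a Flask app:

```python
# Minimal sketch, NOT the lab's exact code: how the two metrics could be wired up
# with the prometheus_client and kubernetes Python libraries inside a Flask app.
import threading
import time

from flask import Flask, Response
from kubernetes import client, config
from prometheus_client import CONTENT_TYPE_LATEST, Counter, Gauge, generate_latest

app = Flask(__name__)

# Gauge: current number of pods in the Running phase
RUNNING_PODS = Gauge("running_pods", "Number of running pods in the cluster")
# Counter: total hits on the hello-world endpoint (exported as app_hello_world_total)
HELLO_WORLD = Counter("app_hello_world", "Hits on the hello-world endpoint")

def watch_running_pods():
    """Background thread: poll the Kubernetes API and update the gauge."""
    config.load_incluster_config()  # use load_kube_config() when running locally
    v1 = client.CoreV1Api()
    while True:
        pods = v1.list_pod_for_all_namespaces().items
        RUNNING_PODS.set(sum(1 for p in pods if p.status.phase == "Running"))
        time.sleep(10)

@app.route("/")
def hello():
    HELLO_WORLD.inc()  # increment the counter on every request
    return "Hello, World!"

@app.route("/metrics")
def metrics():
    # Expose all registered metrics in the Prometheus text format
    return Response(generate_latest(), mimetype=CONTENT_TYPE_LATEST)

if __name__ == "__main__":
    threading.Thread(target=watch_running_pods, daemon=True).start()
    app.run(host="0.0.0.0", port=5000)
```

The gauge is refreshed from a background thread so that scrapes stay cheap, while the counter is bumped inline on each request.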

## Test each metric

First, we have to make it possible to view the Prometheus server's UI.

Run this command:

```sh
# Port forward to the Prometheus UI
kubectl port-forward svc/metrics-prometheus-server 8082:80 &
```

Open the Prometheus UI at http://localhost:8082 to view both metrics, then follow the instructions below to inspect and test each one.

### The running_pods metric

Assuming you opened the Prometheus server's UI (above), you should see that running_pods and kubelet_running_pods have the same value.

kubelet_running_pods is a built-in metric exposed by the kubelet and scraped out of the box by this Prometheus install. It should have the same value as our custom running_pods metric.

Scale the Python app up by one replica:

```sh
kubectl scale deployment python-with-prometheus --replicas=2
```

Inspect running_pods and kubelet_running_pods to see the change in Prometheus.
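If you prefer to check from code instead of the UI, here is a small sketch that uses Prometheus' standard HTTP query API, reachable through the port-forward on 8082 set up above:

```python
# Minimal sketch: query the Prometheus HTTP API through the port-forward on 8082.
# Uses only the standard /api/v1/query endpoint; adjust the port if yours differs.
import requests

def query(expr: str):
    """Run an instant PromQL query and return the raw result list."""
    resp = requests.get(
        "http://localhost:8082/api/v1/query",
        params={"query": expr},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"]

if __name__ == "__main__":
    # Our custom gauge and the kubelet's built-in metric should agree.
    print("running_pods:        ", query("running_pods"))
    print("kubelet_running_pods:", query("kubelet_running_pods"))
```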

### The app_hello_world metric

First, we have to make it possible to call the Python app.

Run this command:

```sh
# Port forward to the Python app
kubectl port-forward svc/slytherin-svc 5000:5000 &
```

Then, call the Python app's API however many times you like using curl:

```sh
# This will call our API and return "Hello, World!"
curl http://localhost:5000
```

Assuming you opened the Prometheus server's UI (above), you should see the count for app_hello_world_total increase by however many times you called the API.
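The query() helper sketched earlier can also be pointed at the counter, assuming the port-forward on 8082 is still running:

```python
# Reuses the query() helper from the earlier Prometheus HTTP API sketch.
print(query("app_hello_world_total"))            # instant value of the counter
print(query("rate(app_hello_world_total[5m])"))  # per-second rate over 5 minutes
```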

## How does Prometheus get the data for these metrics?

Prometheus scrapes metrics exposed in a simple text-based format.

The text it actually ingests for this lab can be viewed here, assuming you still have the port-forward set up on port 5000. For background, the app is set up to serve Prometheus metrics here.
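As a rough illustration (the /metrics path below is an assumption; use the route linked above if it differs), you can fetch and print that exposition text yourself:

```python
# Fetch the raw Prometheus exposition text from the app through the port-forward on 5000.
import requests

text = requests.get("http://localhost:5000/metrics", timeout=5).text
print(text)

# Typical lines look roughly like:
#   # HELP app_hello_world_total Hits on the hello-world endpoint
#   # TYPE app_hello_world_total counter
#   app_hello_world_total 3.0
```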

Prometheus is installed with a Helm chart in this lab, and the chart supports annotations. Essentially, Prometheus watches for pods that carry these annotations, configures itself to scrape them, and then ingests their metrics.
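To see which pods advertise themselves for scraping, here is a hypothetical sketch that lists annotations following the common prometheus.io/* convention (the chart's documentation is authoritative on the exact annotation names it honors):

```python
# List pods whose annotations follow the common prometheus.io/* scrape convention.
from kubernetes import client, config

config.load_kube_config()  # running locally against the k3d context
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    annotations = pod.metadata.annotations or {}
    prom = {k: v for k, v in annotations.items() if k.startswith("prometheus.io/")}
    if prom:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {prom}")
```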

## View logs

First, we have to get access to Grafana in the loki-stack:

```sh
# make Grafana accessible
kubectl port-forward service/loki-stack-grafana 3000:80 &
# export its admin password
GRAFANA=$(kubectl get secret loki-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo)
# copy this password
echo $GRAFANA
```

Then, you can view logs for the Python app in Grafana at http://localhost:3000, after logging in as admin with the password above, of course.

## Clean-up

```sh
# delete the Kubernetes cluster
k3d cluster delete
# point back at your original Kubernetes context
kubectx $CURRENT_CONTEXT
# close the terminal
exit
```

No local machines were (hopefully) harmed in the making of this lab.