Getting started with Prometheus: Setting up and monitoring a FastAPI based python application
Prometheus is an open-source event monitoring and alerting system. Essentially, it combines a time-series database with an alert-triggering mechanism. This document summarizes my experimentation, starting from installation with Docker plus some tweaks, and moving on to what I can do with it in Python, more specifically with FastAPI. As the title suggests, this is just getting started with Prometheus; one can do much more with it than this post covers.
Installation
As usual, get the ready-to-use official Docker image, which makes getting started easy:
docker pull prom/prometheus
Then run the server as follows. In the following command, we assign the container a name and map the container's port 9090 to our system's port 9090. As mentioned in the documentation, we can mount our configuration YAML file by mapping it to /etc/prometheus/prometheus.yml. Alternatively, we can mount a folder containing the YAML file to /etc/prometheus. All settings in this file are described in the official documentation.
docker run -d --name=my_prometheus_instance -p 9090:9090 -v <PATH_TO_prometheus.yml_FILE>:/etc/prometheus/prometheus.yml prom/prometheus --config.file=/etc/prometheus/prometheus.yml
Here, we’ve named the running container my_prometheus_instance; therefore, if you run the command a second time, you will get an error about a conflict in the container name, which is to be expected. To circumvent that, either remove the earlier container with docker rm -f my_prometheus_instance, pick a different name, or drop the --name option to get a random name. As a starting point, use the default YAML file provided by Prometheus. In my case I’m using PowerShell (remember to use an absolute path); Git Bash has issues with volume mounting, so I generally use PowerShell on Windows:
docker run --name=my_prometheus -p 9090:9090 -v $PWD\prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
If everything went well, the terminal shows that the server is ready, and we can open the web UI at: http://localhost:9090/
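Optionally, you can validate the config file before starting the server with promtool, which ships inside the same image. A minimal sketch (assuming the promtool binary is on the image's PATH, which it is in current prom/prometheus images):

docker run --rm -v $PWD\prometheus.yml:/etc/prometheus/prometheus.yml --entrypoint=promtool prom/prometheus check config /etc/prometheus/prometheus.yml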
Looking at the default metrics
By default, Prometheus monitors itself, which means it analyzes its own performance with the default settings.
- The expression panel provides a way to write queries. In the search panel, if you type up and execute it, then you’ll be able to see up{instance="localhost:9090", job="prometheus"} in the Table tab. The up metric reports, per scraped endpoint (in this case only the Prometheus one), whether the last scrape worked. It is a binary metric, which means we only see 0 (the scrape failed) or 1 (the scrape succeeded). One can see what Prometheus is logging about itself by going to http://localhost:9090/metrics
- When we click on the Graph tab, we can also see the interactive visualization.
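Beyond up, the self-scrape already gives you material to play with in the expression panel. Two hedged PromQL examples; the metric names are taken from a recent Prometheus build, so verify them against your own /metrics page:

# per-second rate of HTTP requests served by Prometheus itself, over 5 minutes
rate(prometheus_http_requests_total[5m])

# total number of samples appended to the local TSDB
prometheus_tsdb_head_samples_appended_total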
[Optional] System metrics on Windows
So, I’m using a Windows machine, and there is an exporter available which exposes system metrics on a port that we can add to Prometheus’s config file.
- After downloading the exporter, run it by double-clicking on it. You might see a warning about an untrusted publisher, but click Run Anyway.
- By default, you can see the metrics on http://localhost:9182/metrics. Now, we need to tell Prometheus about it.
- But remember that we run Prometheus on Windows via a Linux Docker container, so we need to pass the host’s IP (i.e. the IPv4 address) in the Prometheus config file, e.g. by adding the following lines under scrape_configs. You can get the IP by running ipconfig in Git Bash; the IPv4 Address is the one you need.
- job_name: "windows_exporter"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ["xxx.xxx.x.17:9182"]
Then restart the server (after removing the earlier container) to see the effect. You can again try the up query to see whether we’re scraping something; to see the results in graphs, give it a minute or so. If the IP approach doesn’t work for you, you can look at the --add-host option of docker run (a sketch follows below). This whole process becomes much more straightforward by spinning up the official node exporter from Prometheus on a Linux machine.
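As a hedged sketch of that --add-host route (assuming Docker Engine 20.10+, where the special host-gateway value is available; on Docker Desktop, host.docker.internal usually resolves even without the flag):

docker run -d --name=my_prometheus -p 9090:9090 --add-host=host.docker.internal:host-gateway -v $PWD\prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus

With that, the target in the config can be ["host.docker.internal:9182"] instead of a hard-coded IP.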
Instrumenting a python program
As the next step in a beginner’s journey, I was interested in instrumenting a simple Python program. This is usually done via the client libraries, which are available in commonly used languages like Python, Java, Scala, etc. These libraries are responsible for properly tracking and formatting the metrics in the format Prometheus expects.
Using the default prometheus_client library in Python
- As a first step, install prometheus_client via pip:
pip install prometheus_client
- I have a sample demo app in FastAPI, whose script is as follows. You can simply save it as fastapi-demo.py and run uvicorn fastapi-demo:app --host=0.0.0.0 --port=8000 --log-level=debug --reload. Then you can see the app at http://localhost:8000. Here, I’ve already added a few things to track metrics from our default app. Those additions are highlighted by # prometheus comments in the script.
import uvicorn
from fastapi import FastAPI
from fastapi.responses import HTMLResponse
from fastapi import Request
from prometheus_client import start_http_server, Counter  # prometheus

import warnings
warnings.filterwarnings("ignore")

# Counter with two labels; Prometheus exposes it with a `_total` suffix.
app_visit_count = Counter("fast_api_app_visit",
                          "Counts visits to the demo app.",
                          ["app_name", "endpoint"])  # prometheus

app = FastAPI()

@app.get("/", response_class=HTMLResponse)
async def home(request: Request):
    """
    Renders an HTML page.
    """
    app_visit_count.labels("homepage", "home").inc()  # prometheus
    html_content = """
    <html>
        <head>
            <title>Demo-FASTApi App</title>
        </head>
        <body>
            <h3>This has nothing to do with FastAPI. It is static HTML :)</h3>
        </body>
    </html>
    """
    return HTMLResponse(content=html_content)

# Serve the metrics on a separate port, alongside the app itself.
metrics_port = 8001
start_http_server(metrics_port)  # prometheus

if __name__ == "__main__":
    app_port = 8000
    uvicorn.run(
        "fastapi-demo:app",
        port=app_port,
        host="0.0.0.0",
        reload=True,
    )
- Correspondingly, add the following to your YAML file:
- job_name: "prom_python_fastapi"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ["xxx.xxx.x.17:8001"]
- Restart the Docker container. If everything is fine, then you can look at http://localhost:9090/targets and your new app should be there. Remember not to run the script directly with python (which executes the main block): with reload=True, uvicorn starts more than one process, so the metrics server is created more than once, which causes an error about duplicates.
- You can also check http://localhost:8001/ and observe that the newly added fast_api_app_visit_total is now available; it will increase as soon as you refresh the main endpoint, i.e. http://localhost:8000/
- Now you are good to go with the Prometheus dashboard. You should be able to search for the newly added metric in the table and in the graph.
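Counter is only one of the metric types prometheus_client provides; Gauge, Histogram, and Summary follow the same pattern. A minimal sketch (the metric names here are made up for illustration, not part of the demo app):

from prometheus_client import Gauge, Histogram

# Gauge: a value that can go up and down, e.g. items waiting in a queue.
queue_size = Gauge("demo_queue_size", "Items currently waiting in the queue.")
queue_size.set(42)

# Histogram: bucketed observations, useful for latency percentiles.
request_latency = Histogram("demo_request_latency_seconds",
                            "Time spent handling a request.")

@request_latency.time()  # the decorator records each call's duration
def handle_request():
    pass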
Instrumenting a Starlette application with Starlette-Prometheus
Starlette is the Asynchronous Server Gateway Interface (ASGI) framework on which other libraries, including FastAPI, are built. Starlette-Prometheus is a middleware built on top of it; this particular library can instrument basic HTTP request-related items.
- As a first step, install starlette-prometheus via pip:
pip install starlette-prometheus
- Let’s have a slightly different yet simple FastAPI program, as follows. Simply copy the following code-block content into a new Python file named star-fastapi-demo.py, and run uvicorn star-fastapi-demo:app --host=0.0.0.0 --port=8000 --log-level=debug --reload
from fastapi import FastAPI
from starlette_prometheus import metrics, PrometheusMiddleware

app = FastAPI()
# The middleware records request metrics; the route exposes them.
app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics", metrics)

@app.get('/hello-world/')
def hello_world():
    return "Hello World"

@app.get('/bye-world/')
def bye_world():
    return "Bye World"
- After a successful launch, you can observe the metrics on http://localhost:8000/metrics: histograms, counts, etc. for each endpoint, achieved by adding only three lines of code!
- Correspondingly, add the following to your YAML file:
- job_name: "prom_python_fastapi_star"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ["xxx.xxx.x.17:8000"]
- Restart the Docker container. If everything is fine, then you can look at http://localhost:9090/targets and your new app should be there with a green UP state.
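Once the target is up, the middleware’s counters can be queried from the expression panel. The exact metric names below are an assumption on my side (based on the library’s README); check your own /metrics output for what is actually exposed:

# per-second request rate over the last 5 minutes (metric name assumed)
rate(starlette_requests_total[5m])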
Instrumenting a FastAPI application with Prometheus FastAPI Instrumentator
Prometheus FastAPI Instrumentator is tailored particularly to FastAPI instrumentation. Getting started with it is also very simple, in only a few lines of code.
- As a first step, install prometheus-fastapi-instrumentator via pip:
pip install prometheus-fastapi-instrumentator
- We’ll use the same code as above, with a slight modification for this library. Simply copy the following code-block content into a new Python file named prometheus-instrumentator-demo.py, and run uvicorn prometheus-instrumentator-demo:app --host=0.0.0.0 --port=8000 --log-level=debug --reload
from fastapi import FastAPI
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI()
# instrument() hooks in the middleware; expose() adds the /metrics route.
Instrumentator().instrument(app).expose(app)

@app.get('/hello-world/')
def hello_world():
    return "Hello World"

@app.get('/bye-world/')
def bye_world():
    return "Bye World"
- After a successful launch, you can observe the metrics on http://localhost:8000/metrics, similar to our last library example. With this library, we can also introduce other metrics easily, as depicted here; see the sketch after this list.
- Correspondingly, add the following to your YAML file (use a new job_name if you kept the previous job, since duplicate job names are rejected):
  - job_name: "prom_python_fastapi_instrumentator"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["xxx.xxx.x.17:8000"]
- Restart the Docker container. If everything is fine, then you can look at http://localhost:9090/targets and your new app should be there with a green UP state.
- As a next step, we can issue several correct requests (e.g. http://127.0.0.1:8000/hello-world/) and several wrong ones (e.g. http://127.0.0.1:8000/hello-world/xx) to have proper logs to play around with. Now, in the Graph view, we can filter by the count of successful queries with http_requests_total{status="2xx"}, or find the total queries for a given endpoint with http_request_size_bytes_count{handler="/hello-world/"}.
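As a closing sketch of that customization: the library ships metric factories that can be chained onto the instrumentator before instrument()/expose(). The bucket values below are arbitrary choices for illustration; see the project’s documentation for the full list of factories:

from fastapi import FastAPI
from prometheus_fastapi_instrumentator import Instrumentator, metrics

app = FastAPI()

instrumentator = Instrumentator()
# Latency histogram with custom buckets (in seconds); values are examples only.
instrumentator.add(metrics.latency(buckets=(0.1, 0.5, 1.0, 5.0)))
# Also track request and response payload sizes.
instrumentator.add(metrics.request_size())
instrumentator.add(metrics.response_size())
instrumentator.instrument(app).expose(app)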