Prometheus integration for Starlette.
```
$ pip install starlette-prometheus
```
A complete example that exposes a Prometheus metrics endpoint under the `/metrics/` path:

```python
from starlette.applications import Starlette
from starlette_prometheus import metrics, PrometheusMiddleware

app = Starlette()
app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics/", metrics)
```
Metrics for paths that do not match any Starlette route can be filtered out by passing the `filter_unhandled_paths=True` argument to the `add_middleware` method. Note that leaving this filtering off can lead to unbounded memory use when many different unmatched paths are requested.
This project is absolutely open to contributions, so if you have a nice idea, create an issue to let the community discuss it.
Bumps certifi from 2021.10.8 to 2022.12.7.
- `de0eae1` Only use importlib.resources's new files() / Traversable API on Python ≥3.11 ...
- `47fb7ab` Fix deprecation warning on Python 3.11 (#199)
- `b0b48e0` fixes #198 -- update link in license
- `4151e88` Add py.typed to MANIFEST.in to package in sdist (#196)
By default, metrics were grouped by pid, which can generate a lot of series when running many workers in multiprocess mode. These can also stack up over time, since worker processes restart and leave stale per-pid metrics behind. With this change, the pid is removed and the metric is the same regardless of the multiprocess context.
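Relatedly, prometheus_client's multiprocess mode expects dead workers to be cleaned up explicitly; a common gunicorn configuration for that (a sketch, independent of this change) looks like:

```python
# gunicorn.conf.py (sketch): tell prometheus_client when a worker dies,
# so its per-pid metric files can be aggregated instead of leaking forever.
from prometheus_client import multiprocess


def child_exit(server, worker):
    multiprocess.mark_process_dead(worker.pid)
```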
Hello there 👋
Because of the numerous limitations of the
`BaseHTTPMiddleware` class provided by Starlette, the Starlette dev team is about to deprecate it and encourages people to write "pure" ASGI middlewares. In particular, one of these limitations causes issue #33 here.
This PR is an attempt at converting the existing middleware so it no longer relies on
`BaseHTTPMiddleware`. The resulting code is quite similar; the only "tricky" part is where we wrap the
`send` function with our own, as is the common pattern in ASGI.
All existing tests are passing.
I would be glad to discuss it and make all changes needed so we can integrate this new approach in the library.
When submounting routes, rather than the full path template being used, only the mount prefix is used.
Running the following app:

```python
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.responses import Response
from starlette.routing import Mount, Route
from starlette_prometheus import PrometheusMiddleware, metrics


async def foo(request):
    return Response()


async def bar_baz(request):
    return Response()


routes = [
    Route("/foo", foo),
    Mount("/bar", routes=[Route("/baz", bar_baz)]),
    Route("/metrics", metrics),
]
middleware = [Middleware(PrometheusMiddleware)]
app = Starlette(routes=routes, middleware=middleware)
```
Then making the following requests:
```
$ curl localhost:8000/foo
$ curl localhost:8000/bar/baz
$ curl localhost:8000/metrics
```
Gives the following output (I only included one metric as an example, but it's the same for all of them). Note that the request to
`localhost:8000/bar/baz` gets a path label of only the mount prefix, `/bar`, rather than the full `/bar/baz` path.
The prometheus_client library deprecated the lowercase `prometheus_multiproc_dir` environment variable in favor of `PROMETHEUS_MULTIPROC_DIR`, so I updated the example
`metrics` view to also respect
`PROMETHEUS_MULTIPROC_DIR` - wdyt?
Is there a way to show the endpoint in openapi.json when using the middleware with FastAPI?
`90837df` UnboundLocalError: local variable 'status_code' referenced before assignment