A collection of macro benchmarks for the Python programming language
```shell
python3 -m pyperformance run --manifest $PWD/benchmarks/MANIFEST -b pyston_standard
python3 -m pyperformance run --manifest $PWD/benchmarks/MANIFEST -b all
```
The benchmarks can still be run without pyperformance. This will produce the old results format.
```shell
sh ./run_mypy.sh
```
PyPy 7.3.3-beta0 with GCC 7.3.1 20180303 (Red Hat 7.3.1-5)
OS: self-configured Linux 6.0.0 with network enabled (able to use wget, pip, and ping)
I was trying to execute `./run_all.sh pypy3` after configuring all prerequisites when I encountered this problem. Here is the trace log:
```
Command failed with exit code 1
Traceback (most recent call last):
  File "/root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/run_benchmark.py", line 76, in
```
The pyperformance project is useful for running benchmarks in a consistent way and for analyzing the results. The CPython project uses it to generate the results you can find on https://speed.python.org. The "faster cpython" project, on which I work with Guido and others, is also using it regularly.
We'd like to incorporate the benchmarks here into the suite we run. That involves getting them to run under pyperformance. (Note that pyperformance hasn't historically supported running external benchmarks, but I'm in the process of changing that.) I'm happy to do the work to update the benchmarks here. (Actually, I already did it, in part to verify the changes I made to pyperformance.)
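For context, here is a rough sketch of what an external-benchmark manifest for pyperformance looks like. This is an assumption based on pyperformance's manifest format as I understand it; the exact column layout, the `<local>` metafile syntax, and the benchmark names shown (`flaskblogging`, `mypy`, taken from the scripts mentioned above) should be checked against the pyperformance documentation:

```
[benchmarks]

name	metafile
flaskblogging	<local>
mypy	<local>
```

With a file like this saved as `benchmarks/MANIFEST`, the `--manifest $PWD/benchmarks/MANIFEST` flag used in the commands above tells pyperformance where to find these out-of-tree benchmarks.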
So there are a few questions to answer:
Aside from that, I'll need help to verify that my changes preserve the intent of each benchmark.
Keep in mind that this change will allow you (and us) to take advantage of pyperformance for results stability and analysis, as well as posting results to speed.python.org. (You'd have to talk to @pablogsal about the possibility of posting results there.)
So, what do you think? I'd be glad to jump into a call to discuss, if that would help.