A collection of macro benchmarks for the Python programming language

pyston, updated 2023-02-23 14:23:45





Run the benchmarks that Pyston uses to measure itself:

```
python3 -m pyperformance run --manifest $PWD/benchmarks/MANIFEST -b pyston_standard
```

Run (almost) all the benchmarks in the repository:

```
python3 -m pyperformance run --manifest $PWD/benchmarks/MANIFEST -b all
```

The benchmarks can still be run without pyperformance; this produces the old results format.


Run the benchmarks:

```
sh ./run_all.sh
```

Run the mypy benchmark using mypyc:

```
sh ./run_mypy.sh
```


waitUntilUp() raise Exception('Timeout reached when trying to connect')

opened on 2022-12-20 01:42:25 by bigjr-mkkong

Python version: PyPy 7.3.3-beta0 with GCC 7.3.1 20180303 (Red Hat 7.3.1-5). OS: self-configured Linux 6.0.0 with networking enabled (able to use wget, pip, and ping).

I was trying to execute ./run_all.sh pypy3 after configuring all prerequisites, and I encountered this problem. Here is the trace log:

```
[1/1] flaskblogging...

/root/vmshare/python-macrobenchmarks/venv/pypy3.7-37b672b9fc89-compat-84ebb708f58d/bin/python -u /root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/run_benchmark.py --output /tmp/tmpwxtbfkph --inherit-enviroD

Command failed with exit code 1
Traceback (most recent call last):
  File "/root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/run_benchmark.py", line 76, in <module>
    with context:
  File "/root/pypy3.7-v7.3.3-linux64/lib-python/3/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/netutils.py", line 36, in serving
    waitUntilUp(addr)
  File "/root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/netutils.py", line 66, in waitUntilUp
    raise Exception('Timeout reached when trying to connect')
Exception: Timeout reached when trying to connect
ERROR: Benchmark flaskblogging failed: Benchmark died
Traceback (most recent call last):
  File "/tmp/benchmark_env/site-packages/pyperformance/run.py", line 147, in run_benchmarks
    verbose=options.verbose,
  File "/tmp/benchmark_env/site-packages/pyperformance/_benchmark.py", line 191, in run
    verbose=verbose,
  File "/tmp/benchmark_env/site-packages/pyperformance/_benchmark.py", line 232, in _run_perf_script
    raise RuntimeError("Benchmark died")
RuntimeError: Benchmark died
```
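For context, the failing helper polls the benchmark's HTTP server until it accepts a TCP connection, and raises after a timeout. The sketch below is a hypothetical reconstruction of such a wait-until-up loop (the real netutils.py may differ; the function name, signature, and timeout values here are assumptions):

```python
import socket
import time

def wait_until_up(addr, timeout=10.0, interval=0.1):
    """Poll a (host, port) address until a TCP connection succeeds.

    Retries every `interval` seconds; raises once `timeout` seconds
    have elapsed without a successful connection.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection completes a full TCP handshake
            with socket.create_connection(addr, timeout=1.0):
                return
        except OSError:
            # Server not up yet (connection refused / timed out); retry
            time.sleep(interval)
    raise Exception('Timeout reached when trying to connect')
```

A timeout here usually means the benchmark's server process never started listening, so checking the server's own stderr (e.g. a missing dependency in the PyPy venv) is a more direct diagnostic than the traceback above.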

Use pyperformance to run the benchmarks.

opened on 2021-11-17 18:37:26 by ericsnowcurrently

The pyperformance project is useful for running benchmarks in a consistent way and for analyzing the results. The CPython project uses it to generate the results you can find on https://speed.python.org. The "faster cpython" project, on which I work with Guido and others, is also using it regularly.

We'd like to incorporate the benchmarks here into the suite we run. That involves getting them to run under pyperformance. (Note that pyperformance hasn't supported running external benchmarks, but I recently changed, or am in the process of changing, that.) I'm happy to do the work to update the benchmarks here. (Actually, I already did it, in part to verify the changes I made to pyperformance.)

So there are a few questions to answer:

  • are there any objections to updating these benchmarks to work with pyperformance? (I'll do the work.)
  • would it be okay if the output format from the benchmarks changes?
  • would it be okay to change the command for invoking these benchmarks? (use pyperformance directly instead of the existing "run_all.sh" script)

Aside from that, I'll need help to verify that my changes preserve the intent of each benchmark.

Keep in mind that this change will allow you (and us) to take advantage of pyperformance for results stability and analysis, as well as posting results to speed.python.org. (You'd have to talk to @pablogsal about the possibility of posting results there.)

So, what do you think? I'd be glad to jump into a call to discuss, if that would help.

add simple pyramid benchmark

opened on 2021-04-26 09:40:49 by undingen