Creating performance tests for pytest using pytest-codspeed
The simplest way to integrate CodSpeed with your Python codebase is to use
pytest-codspeed. This pytest extension automatically enables the CodSpeed
engine on your benchmarks and reports the results to CodSpeed.
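You can install it from PyPI, for example with pip:

```bash
pip install pytest-codspeed
```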
pytest-codspeed is backward compatible with the pytest-benchmark API, so if
you already have benchmarks written with pytest-benchmark, you can start using
CodSpeed right away!
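For example, a test written against the pytest-benchmark fixture API, like this hypothetical one, runs under pytest-codspeed without any change:

```python
def test_fibonacci(benchmark):
    def fibo(n: int) -> int:
        return n if n < 2 else fibo(n - 1) + fibo(n - 2)

    # The pytest-benchmark calling convention: pytest-codspeed provides the
    # same `benchmark` fixture, so this test needs no modification.
    result = benchmark(fibo, 10)
    assert result == 55
```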
In a nutshell, pytest-codspeed offers two approaches to create performance benchmarks
that integrate seamlessly with your existing test suite.
Use @pytest.mark.benchmark to measure an entire test function automatically:
```python
import pytest


@pytest.mark.benchmark
def test_sum_of_squares():
    data = [1, 2, 3, 4, 5]
    output = sum(i**2 for i in data)
    assert output == 55
```
Since the marker measures the entire function, you might want to use the benchmark fixture instead for precise control over exactly what code gets measured:
```python
def test_sum_of_squares_fixture(benchmark):
    data = [1, 2, 3, 4, 5]
    # Only the callable passed to `benchmark` is measured
    result = benchmark(lambda: sum(i**2 for i in data))
    assert result == 55
```
Check out the full pytest-codspeed documentation for more details.
If you want to run your benchmark tests locally, you can use the --codspeed
pytest flag:
```
$ pytest tests/ --codspeed
======================== test session starts =========================
platform linux -- Python 3.10.4, pytest-7.1.3, pluggy-1.0.0
codspeed: 1.0.4
NOTICE: codspeed is enabled, but no performance measurement will be
made since it's running in an unknown environment.
rootdir: /home/user/codspeed-test, configfile: pytest.ini
plugins: codspeed-1.0.4
collected 6 items

tests/test_iterative_fibo.py .                                [ 16%]
tests/test_recursive_fibo.py ..                               [ 50%]
tests/test_recursive_fibo_cached.py ...                       [100%]

========================= 6 benchmark tested =========================
========================== 6 passed in 0.02s =========================
```
Running pytest-codspeed locally will not produce any performance reporting.
It’s only useful for making sure that your benchmarks are working as expected.
If you want to get performance reporting, you should run the benchmarks in
your CI.
To generate performance reports, you need to run the benchmarks in your CI. This
allows CodSpeed to automatically run benchmarks and warn you about regressions
during development.
If you want more details on how to configure the CodSpeed action, you can check
out the Continuous Reporting section.
Here is an example of a GitHub Actions workflow that runs the benchmarks and
reports the results to CodSpeed on every push to the main branch and every
pull request:
.github/workflows/codspeed.yml
```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.13"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: pytest tests/ --codspeed
          token: ${{ secrets.CODSPEED_TOKEN }} # optional for public repos
```
To parallelize your benchmarks, you can use
pytest-test-groups, a
pytest plugin that lets you split your benchmark execution across several
CI jobs.
First, install pytest-test-groups as a development dependency:
```bash
uv add --dev pytest-test-groups
```
Update your CI workflow to run benchmarks shard by shard:
```yaml
jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4, 5]
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v5
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version-file: pyproject.toml
      - name: Install dependencies
        run: uv sync --all-extras --dev
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: uv run pytest tests/ --codspeed --test-group=${{ matrix.shard }} --test-group-count=5
          token: ${{ secrets.CODSPEED_TOKEN }} # optional for public repos
```
The shard number must start at 1. If you run with a shard number of 0, all
the benchmarks will be run.
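Note that --test-group-count must match the number of shards in the matrix. To avoid keeping the two in sync by hand, one option, assuming the matrix only contains the shard dimension, is GitHub Actions' strategy.job-total context:

```yaml
run: uv run pytest tests/ --codspeed --test-group=${{ matrix.shard }} --test-group-count=${{ strategy.job-total }}
```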
Same benchmark with different variations
For now, you cannot run the same benchmark several times within the same run.
If the same benchmark is run multiple times, CodSpeed will warn you with a
comment on your pull request.
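If you need to measure the same code under several inputs, one approach, sketched here with standard pytest parametrization rather than a CodSpeed-specific feature, is @pytest.mark.parametrize: each parameter set produces a distinct test ID, so every variation registers as a separate benchmark:

```python
import pytest


# Each parameter set yields a distinct test ID, e.g. test_sum_squares[10],
# so the variations do not collide as duplicate benchmarks.
@pytest.mark.parametrize("n", [10, 100, 1000])
@pytest.mark.benchmark
def test_sum_squares(n):
    assert sum(i**2 for i in range(n)) > 0
```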
If you cannot split your benchmarks across multiple CI jobs, you can split them
across multiple processes in the same job. We only recommend this as an
alternative to the parallel CI jobs setup.
pytest-codspeed is compatible with
pytest-xdist, a pytest plugin
that distributes test execution across multiple processes. You can simply
enable pytest-xdist on top of pytest-codspeed to run your benchmarks in
parallel across multiple processes.
First, install pytest-xdist as a development dependency:
```bash
uv add --dev pytest-xdist
```
Then, you can run your benchmarks in parallel with pytest-xdist's -n flag:
```bash
pytest tests/ --codspeed -n auto
```
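With pytest-xdist, -n auto starts one worker process per available CPU; you can also pin an explicit worker count instead:

```bash
pytest tests/ --codspeed -n 4
```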
The change in the CI workflow would look like this:
.github/workflows/codspeed.yml
```yaml
- name: Run benchmarks
  uses: CodSpeedHQ/action@v4
  with:
    token: ${{ secrets.CODSPEED_TOKEN }}
    mode: instrumentation
    run: pytest tests/ --codspeed -n auto
```
It’s possible to use pytest-codspeed with
Nox, a Python automation tool that lets
you run Python code across multiple environments.
Here is an example configuration file to run benchmarks with pytest-codspeed
using Nox:
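A minimal noxfile.py sketch, assuming a single session named codspeed (the session name the workflow below refers to) and a project installable with session.install:

noxfile.py

```python
import nox


@nox.session
def codspeed(session: nox.Session) -> None:
    # Install the project and the benchmarking plugin into the
    # session's virtualenv.
    session.install(".", "pytest", "pytest-codspeed")
    # Run the benchmarks with the CodSpeed instrumentation enabled.
    session.run("pytest", "tests/", "--codspeed")
```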
To use it with GitHub Actions, you can use the following workflow:
.github/workflows/codspeed.yml
```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.13"
      - name: Install Nox
        run: pip install nox
      - name: Install dependencies
        run: nox --sessions codspeed --install-only
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: nox --sessions codspeed --reuse-existing-virtualenvs --no-install
          token: ${{ secrets.CODSPEED_TOKEN }} # optional for public repos
```
Splitting the virtualenv installation from the execution of the benchmarks is
optional, but it speeds up the benchmark run since the dependencies are
installed and compiled without the instrumentation enabled.