Testing plan for the OpenPredict API published at https://openpredict.semanticscience.org
Testing of the Translator OpenPredict API is separated in 3 parts:

- Integration tests run every day against the OpenPredict production API.
- Integration tests on a local API run at each push to the master branch. This allows us to prevent deploying the OpenPredict API if the changes added broke some of its features.
- Docker deployment tests run at each new release.
When one of those 3 workflows fails, we take action to fix the source of the problem.
Requirements to run the tests: Python 3.6+
Install the required dependency if you want to run the tests locally:

```shell
pip install pytest
```
Integration tests are run automatically by the GitHub Action workflow `.github/workflows/run-tests-prod.yml` every day at 01:00 GMT+1 against the OpenPredict production API.
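For reference, a daily run like this is declared with a cron trigger. The snippet below is a hypothetical sketch, not the actual `run-tests-prod.yml`; note that GitHub Actions cron schedules are in UTC, so 01:00 GMT+1 corresponds to 00:00 UTC:

```yaml
# Hypothetical sketch of a scheduled workflow; the real
# .github/workflows/run-tests-prod.yml may differ.
name: Test production API
on:
  schedule:
    # 00:00 UTC = 01:00 GMT+1
    - cron: "0 0 * * *"
jobs:
  test-production:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run integration tests against the production API
        run: pytest tests/integration
```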
We test for an expected number of results and a few specific results in:

- the `/query` TRAPI operation
- the `/predict` BioThings API operation
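A minimal sketch of such an integration test against the `/query` TRAPI operation is shown below. It assumes the `requests` library; the endpoint URL comes from this document, while the DrugBank CURIE and the result-count check are illustrative assumptions, not the real values used in `tests/integration/test_openpredict_api.py`:

```python
import requests

PROD_API = "https://openpredict.semanticscience.org"

# Minimal TRAPI query graph: which diseases does this drug treat?
# The CURIE below is an illustrative assumption, not a real test value.
TRAPI_QUERY = {
    "message": {
        "query_graph": {
            "nodes": {
                "n0": {"ids": ["DRUGBANK:DB00394"], "categories": ["biolink:Drug"]},
                "n1": {"categories": ["biolink:Disease"]},
            },
            "edges": {
                "e01": {
                    "subject": "n0",
                    "object": "n1",
                    "predicates": ["biolink:treats"],
                }
            },
        }
    }
}

def test_post_trapi_query():
    """POST the TRAPI query and check the results, as the workflow does."""
    resp = requests.post(f"{PROD_API}/query", json=TRAPI_QUERY, timeout=60)
    assert resp.status_code == 200
    results = resp.json()["message"]["results"]
    # Check for an expected number of results (illustrative threshold)
    assert len(results) > 0
```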
To run the tests of the OpenPredict production API locally:
Integration tests on a local API are run automatically by the GitHub Action workflow `.github/workflows/run-tests.yml` at each push to the master branch.
We test the embeddings computation with a Spark local context (set up with a GitHub Action), and without a Spark context (using NumPy and pandas).
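The two code paths can be sketched as a fallback pattern; this is an illustrative example with hypothetical function names, not the actual OpenPredict implementation:

```python
import numpy as np

def get_spark_context():
    """Return a local Spark context if pyspark is importable, else None.

    Hypothetical helper: OpenPredict's real Spark detection may differ.
    """
    try:
        from pyspark import SparkContext
        return SparkContext.getOrCreate()
    except ImportError:
        return None

def cosine_similarities(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity: the NumPy path used when Spark is absent."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    return unit @ unit.T
```

Running the test suite once with pyspark installed (the GitHub Action sets up the Spark local context) and once without it exercises both branches of such a fallback.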
You can run the tests for the different components of OpenPredict locally:
To run a specific test in a specific file, and display `print()` lines in the output:

```shell
pytest tests/integration/test_openpredict_api.py::test_post_trapi -s
```
At each new release, we run the GitHub Action workflow `.github/workflows/publish-docker.yml` to test the deployment of the OpenPredict API in a Docker container, and we publish a new image for each new version of the OpenPredict API.
We run an additional workflow to check for vulnerabilities using the CodeQL analysis engine.
Facing issues with the pytest install even when using virtual environments? Try this solution:

```shell
python3 -m pip install -e .
python3 -m pip install pytest
python3 -m pytest
```