Contribute

This page details the workflow to contribute to the fair-test library.

📥 Install for development

Clone the repository and go to the project folder:

git clone https://github.com/MaastrichtU-IDS/fair-test
cd fair-test

To install the project for development you can either use venv to create a virtual environment yourself, or use hatch to automatically handle virtual environments for you.

Install Hatch; it will automatically handle virtual environments and make sure all dependencies are installed when you run a script in the project:

pip install hatch
Optionally, you can enable hatch completion in your terminal. See the official documentation for more details. For zsh you can run these commands:

_HATCH_COMPLETE=zsh_source hatch > ~/.hatch-complete.zsh
echo ". ~/.hatch-complete.zsh" >> ~/.zshrc

Alternatively, if you prefer to manage the virtual environment yourself, create it in the project folder:

python3 -m venv .venv

Activate the virtual environment:

source .venv/bin/activate

Install all dependencies required for development:

pip install -e ".[dev,doc,test]"

You can also enable automated formatting of the code at each commit:

pre-commit install
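
The pre-commit hooks only run on the files changed by a commit. To run them manually against the whole codebase you can use pre-commit directly:

pre-commit run --all-files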

🧑‍💻 Development workflow

With hatch, deploy the FAIR test API defined in the example folder to test your changes:

hatch run dev
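
Once the API is running you can do a quick sanity check from another terminal. This assumes the dev command starts uvicorn on its default port 8000; adjust the port if your setup differs. The interactive API documentation generated by FastAPI on http://localhost:8000/docs should also list the endpoints created for your metric tests.

curl http://localhost:8000/openapi.json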

The code will be automatically formatted by pre-commit when you commit your changes, but you can also run the formatting script yourself:

hatch run fmt

Or check the code for errors:

hatch run check

If you are working in a plain virtual environment instead of hatch, the same tasks are available as scripts. Deploy the FAIR test API defined in the example folder:

./scripts/dev.sh

Format the code:

./scripts/format.sh

Or check the code for errors:

./scripts/check.sh

✅ Run the tests

Tests are automatically run by a GitHub Actions workflow when new code is pushed to the GitHub repository. The subject URLs to test and their expected score are retrieved from the test_test attribute of each metric test.
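
For reference, each metric test declares its expected results in a test_test attribute: a dictionary mapping subject URLs to the score each one should obtain. A trimmed-down sketch following the structure described in the fair-test documentation (the metric details, URLs and scores below are placeholders, and the evaluation logic is elided):

from fair_test import FairTest, FairTestEvaluation

class MetricTest(FairTest):
    metric_path = "a1-metadata-protocol"
    applies_to_principle = "A1"
    title = "Check Metadata Protocol"
    description = "Check if the metadata can be accessed through an open protocol."
    metric_version = "0.1.0"
    # Subject URLs mapped to the score each one is expected to obtain.
    # The test suite defined below runs every metric test against these entries.
    test_test = {
        "https://doi.org/10.1594/PANGAEA.908011": 1,
        "http://example.com/nothing-to-find-here": 0,
    }

    def evaluate(self, eval: FairTestEvaluation):
        eval.info(f"Checking how the metadata of {eval.subject} can be accessed")
        # Metric-specific checks calling eval.success() / eval.failure() go here
        return eval.response()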

If not already done, define the two files required to run the tests. The test suite will check every case defined in the test_test attributes of your FAIR metric tests:

tests/conftest.py
def pytest_addoption(parser):
    # Register a --metric option to optionally run the tests for a single metric test
    parser.addoption("--metric", action="store", default=None)

and:

tests/test_metrics.py
import pytest
from fastapi.testclient import TestClient
from main import app

endpoint = TestClient(app)

def test_api(pytestconfig):
    # Run every case defined in the test_test attribute of each metric test,
    # optionally restricted to a single metric with the --metric option
    app.run_tests(endpoint, pytestconfig.getoption('metric'))

Run the tests locally with hatch:

hatch run test

You can also run the tests only for a specific metric test:

hatch run test --metric a1-metadata-protocol

Without hatch, use the test script:

./scripts/test.sh

Or only for a specific metric test:

./scripts/test.sh --metric a1-metadata-protocol

📖 Generate docs

The documentation (this website) is automatically generated from the markdown files in the docs folder and the Python docstrings, and published by a GitHub Actions workflow.
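
Since part of the documentation is generated from docstrings, keep the docstrings of public functions and classes descriptive. A minimal sketch with a hypothetical helper function (not part of the library), just to show the idea:

def count_statements(metadata: dict) -> int:
    """Count the metadata statements retrieved for a subject.

    Args:
        metadata: Metadata retrieved for the subject, as a dictionary.

    Returns:
        The number of metadata statements found.
    """
    return len(metadata)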

Serve the docs on http://localhost:8008 with hatch:

hatch run docs

Or with the script:

./scripts/docs-serve.sh

🏷️ Publish a new release

  1. Increment the __version__ in fair_test/__init__.py (see the sketch after this list)
  2. Push to GitHub
  3. Create a new release on GitHub
  4. A GitHub Actions workflow will automatically publish the new version to PyPI
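
The version bump in step 1 is a one-line change in fair_test/__init__.py. The number below is only an example; use the next version for the release you are preparing:

__version__ = "0.2.1"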