# Create a metric test
This page explains how to create new FAIR metrics tests.
## 🎯 Define a FAIR metrics test
Create a file in the `metrics` folder with your test. Here is a basic example, with explanatory comments, to check if RDF metadata can be found at the subject URI:
```python
from fair_test import FairTest, FairTestEvaluation


class MetricTest(FairTest):
    # Define the parameters of the test
    metric_path = 'a1-check-something'
    applies_to_principle = 'A1'
    title = 'Check something'
    description = """Test something"""
    # Optional, will use the contact info from the .env file if not provided here
    author = 'https://orcid.org/0000-0000-0000-0000'
    contact_url = 'https://github.com/LUMC-BioSemantics/RD-FAIRmetrics'
    contact_name = 'Your Name'
    contact_email = 'your.email@email.com'
    organization = 'The Organization for which this test is published'
    # Optional, if your metric test has a detailed readme:
    metric_readme_url = 'https://w3id.org/rd-fairmetrics/RD-F4'
    metric_version = '0.1.0'
    # You can provide a list of URLs to automatically test,
    # with the score the test is expected to compute for each
    test_test = {
        'https://w3id.org/fair-enough/collections': 1,
        'http://example.com': 0,
    }

    # Define the function to evaluate
    def evaluate(self, eval: FairTestEvaluation):
        # Use the eval object to get the subject of the evaluation,
        # or to access most functions needed for the evaluation (logs, failure, success)
        eval.info(f'Checking something for {eval.subject}')
        g = eval.retrieve_metadata(eval.subject, use_harvester=False)
        if len(g) > 0:
            eval.success(f'{len(g)} triples found, test successful')
        else:
            eval.failure('No triples found, test failed')
        return eval.response()
```
ℹ️ A few common operations are available on the `eval` object (a `FairTestEvaluation`):
- Logging operations (see the first sketch after this list).
- Retrieving metadata from a URL, which returns an RDFLib Graph or a JSON-like object (see the second sketch after this list).
  ℹ️ Improve the metadata harvesting workflow: if the `retrieve_metadata()` function is missing some use cases and you would like to improve it, you can find the code in the `fair_test/fair_test_evaluation.py` file. Check out the Contribute page to see how to edit the `fair-test` library.
- Parsing a string to RDF (see the third sketch after this list).
- Returning the metric test results (see the fourth sketch after this list).
- There is also a `test_test` dictionary to define URIs to be automatically tested against each metric, along with the expected score. See the Development workflow page for more detail on running the tests.
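
The sketches below illustrate these operations in order. First, logging: `eval.info()`, `eval.success()`, and `eval.failure()` all appear in the example above; `eval.warn()` is an assumed warning-level counterpart, so confirm the exact names in the Code reference section.

```python
# Inside your MetricTest class, within evaluate(self, eval: FairTestEvaluation):
eval.info('Something is happening')      # add an informational log entry
eval.warn('Something looks off')         # assumed: warning-level log that does not fail the test
eval.success('The check passed')         # record a success (used to compute the score)
eval.failure('The check did not pass')   # record a failure
```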
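Next, retrieving metadata; this reuses the `retrieve_metadata()` call from the example above (the optional `use_harvester` flag also comes from that example):

```python
# Returns an RDFLib Graph, or a JSON-like object, depending on the metadata found
g = eval.retrieve_metadata(eval.subject)
eval.info(f'Retrieved {len(g)} statements about {eval.subject}')
```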
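Parsing a string to RDF; the `parse_rdf()` name and its `mime_type` parameter are assumptions in this sketch, so verify the exact signature in the Code reference section:

```python
rdf_str = '<https://example.com/subject> <https://example.com/predicate> "object" .'
# Assumed call: parse a string and get back an RDFLib Graph
g = eval.parse_rdf(rdf_str, mime_type='text/turtle')
```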
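Finally, returning the metric test results, exactly as in the example above:

```python
# Compute the final score from the recorded successes/failures and build the response
return eval.response()
```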
ℹ️ Documentation for all functions: you can find the details for all available functions in the Code reference section.
## 🥷 Use secrets
You can also securely provide secret environment variables to your metrics tests. This can be useful to pass API keys for private services, such as search engine APIs. In this example we define an API key named `APIKEY_BING_SEARCH` to perform Bing searches:
- Create an additional `secrets.env` environment file containing the key, e.g. a line `APIKEY_BING_SEARCH=<your-api-key>`; it should not be committed to git (make sure it is added to the `.gitignore`).
- To use the secret in development, define the environment variable locally in your terminal, e.g. with `export APIKEY_BING_SEARCH=<your-api-key>`.
- Add this file to your `docker-compose.yml` to use the secrets in production, e.g. by listing `secrets.env` under your service's `env_file:` entry.
- You can then retrieve this API key in your metrics tests, as shown in the sketch below.
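
A minimal sketch of that last step, assuming plain `os.environ` access and the variable name from this example (the library may also expose its own helper; check the Code reference section):

```python
import os

# Inside evaluate(): read the secret set via secrets.env (production) or `export` (development)
apikey = os.environ.get('APIKEY_BING_SEARCH')
if not apikey:
    # eval.warn() assumed as above: log the problem without failing the whole test
    eval.warn('APIKEY_BING_SEARCH is not set, skipping the Bing search check')
```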