Translator OpenPredict :crystal_ball: :snake:

Translator API to compute and serve predictions of biomedical concept associations


OpenPredict is a Python library and API to train and serve models predicting associations between biomedical entities (e.g. a disease treated by a drug).

Metadata about runs, model evaluations, and features is stored using the ML Schema ontology in an RDF triplestore (such as Ontotext GraphDB or Virtuoso).

Access the Translator OpenPredict API at https://openpredict.semanticscience.org :crystal_ball: :snake:

You can use this API to retrieve predictions for drug/disease, or add new embeddings to improve the model.

Deploy the OpenPredict API locally :woman_technologist:

Requirements: Python 3.6+ and pip installed

You can install the openpredict Python package with pip to run the OpenPredict API on your machine, test new embeddings, or improve the library.

We currently recommend installing from the master branch of the source code to get the latest version of OpenPredict. But we also regularly publish the openpredict package to PyPI: https://pypi.org/project/openpredict

Install from the source code :inbox_tray:

Clone the repository:

git clone https://github.com/MaastrichtU-IDS/translator-openpredict.git
cd translator-openpredict

Install openpredict from the source code; the package will be automatically updated when the files change locally :arrows_counterclockwise:

pip3 install -e .

Optional: isolate with a Virtual Environment

If you face conflicts with already installed packages, you might want to use a virtual environment to isolate the installation in the current folder before installing OpenPredict:

# Create the virtual environment folder in your workspace
python3 -m venv .venv
# Activate it using a script in the created folder
source .venv/bin/activate

On Windows you might also need to install the Visual C++ 14 Build Tools (required by numpy)

Start the OpenPredict API :rocket:

Start the OpenPredict API locally on http://localhost:8808

openpredict start-api
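Once the API is running, you can check that it answers HTTP requests with a short Python sketch (the base URL below assumes the default port 8808 used by `openpredict start-api`; `is_api_up` is just an illustrative helper, not part of the openpredict package):

```python
from urllib.request import urlopen

# Default base URL of a locally running OpenPredict API
LOCAL_API = "http://localhost:8808"

def is_api_up(base_url: str = LOCAL_API) -> bool:
    """Return True if the API answers an HTTP request at base_url."""
    try:
        with urlopen(base_url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, or DNS failure: the API is not reachable
        return False
```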

By default all data (RDF metadata, and the features and models of each run) is stored in the data/ folder of the directory where you ran the openpredict command.

Contributions are welcome! If you wish to help improve OpenPredict, see the instructions to contribute :woman_technologist:

Reset your local OpenPredict data :wastebasket:

You can easily reset the data of your local OpenPredict deployment by deleting the data/ folder and restarting the OpenPredict API:

rm -rf data/

If you are working on improving OpenPredict, you can explore additional documentation to deploy the OpenPredict API locally or with Docker.

Test the OpenPredict API

See the TESTING.md file for more details on testing the API.


Use the API :mailbox_with_mail:

The user provides a drug or a disease identifier as a CURIE (e.g. DRUGBANK:DB00394, or OMIM:246300), and chooses a prediction model (only the Predict OMIM-DrugBank classifier is currently implemented).

The API will return predicted targets for the given drug or disease.

Feel free to try the API at openpredict.semanticscience.org

TRAPI operations

Operations to query OpenPredict using the Translator Reasoner API standards.

Query operation

The /query operation will return the same predictions as the /predict operation, using the ReasonerAPI format used within the Translator project.

The user sends a ReasonerAPI query asking for the predicted targets given: a source, and the relation to predict. The query is a graph with nodes and edges defined in JSON, and uses classes from the Biolink model.

You can use the default TRAPI query of the OpenPredict /query operation to try a working example.
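The shape of such a query graph can be sketched in Python. This is a sketch only: the field names follow the TRAPI 1.x layout, and the `biolink:treats` predicate is an assumption for illustration; check the API's OpenAPI documentation for the exact TRAPI version and predicates it expects.

```python
import json

def build_trapi_query(drug_curie: str) -> dict:
    """Build a TRAPI query graph asking which diseases a drug is predicted to treat."""
    return {
        "message": {
            "query_graph": {
                "nodes": {
                    # Source node: the drug we know about
                    "n0": {"ids": [drug_curie], "categories": ["biolink:Drug"]},
                    # Target node: the diseases to predict
                    "n1": {"categories": ["biolink:Disease"]},
                },
                "edges": {
                    # Relation to predict between the two nodes
                    "e01": {
                        "subject": "n0",
                        "object": "n1",
                        "predicates": ["biolink:treats"],
                    }
                },
            }
        }
    }

# JSON body you would POST to the /query operation
payload = json.dumps(build_trapi_query("DRUGBANK:DB00394"))
```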

Predicates operation

The /predicates operation will return the entities and relations provided by this API in a JSON object (following the ReasonerAPI specifications).

Try it at https://openpredict.semanticscience.org/predicates

Notebooks examples :notebook_with_decorative_cover:

We provide Jupyter Notebooks with examples to use the OpenPredict API:

  1. Query the OpenPredict API
  2. Generate embeddings with pyRDF2Vec, and import them in the OpenPredict API

Add embedding :station:

The default baseline model is openpredict-baseline-omim-drugbank. You can choose the base model when you post new embeddings using the /embeddings call. Then the OpenPredict API will:

  1. add embeddings to the provided model
  2. train the model with the new embeddings
  3. store the features and model using a unique ID for the run (e.g. 7621843c-1f5f-11eb-85ae-48a472db7414)

Once the embeddings have been added, you can list the previously generated models (including openpredict-baseline-omim-drugbank) and use them as the base model when you ask for predictions or add new embeddings.
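As a rough sketch, building the request target for the /embeddings call could look like the snippet below. The `model_id` query-parameter name is a guess for illustration only (it is not confirmed by this document); the real parameter names for /embeddings are listed in the API's OpenAPI documentation.

```python
from urllib.parse import urlencode

API_URL = "https://openpredict.semanticscience.org"

def embeddings_url(base_model: str = "openpredict-baseline-omim-drugbank") -> str:
    """Build a /embeddings URL selecting the base model to retrain from.

    NOTE: `model_id` is a hypothetical parameter name used for illustration.
    """
    return f"{API_URL}/embeddings?{urlencode({'model_id': base_model})}"
```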

Predict operation :crystal_ball:

Use this operation if you just want to easily retrieve predictions for a given entity. The /predict operation takes 4 parameters (1 required).

The API will return the list of predicted targets for the given entity; the labels are resolved using the Translator Name Resolver API:

{
  "count": 300,
  "hits": [
    {
      "score": 0.8361061489249737,
      "id": "OMIM:246300",
      "label": "leprosy, susceptibility to, 3",
      "type": "disease"
    }
  ]
}

Try it at https://openpredict.semanticscience.org/predict?drug_id=DRUGBANK:DB00394
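A minimal Python sketch of building the /predict URL and ranking the returned hits. Only the `drug_id` parameter shown in the example URL above is used here; the other /predict parameters are not confirmed by this document, so check the API's OpenAPI documentation for them. `top_hits` is an illustrative helper that assumes the response shape shown in the JSON example above.

```python
from urllib.parse import urlencode

API_URL = "https://openpredict.semanticscience.org"

def predict_url(drug_id: str) -> str:
    """Build a /predict URL for a drug CURIE, as in the example above."""
    return f"{API_URL}/predict?{urlencode({'drug_id': drug_id})}"

def top_hits(response: dict, n: int = 5) -> list:
    """Return the n highest-scoring hits from a /predict JSON response."""
    return sorted(response.get("hits", []), key=lambda h: h["score"], reverse=True)[:n]
```

You could then fetch `predict_url("DRUGBANK:DB00394")` with any HTTP client and pass the decoded JSON to `top_hits`.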


More about the data model :minidisc:

Diagram of the data model used for OpenPredict, based on the ML Schema ontology (mls):

OpenPredict datamodel


Acknowledgments

Funded by the NIH NCATS Translator project