Services available at IDS

The following services are hosted at the Institute of Data Science. Feel free to contact us if you want to make use of any of them, or are interested in deploying your own services on IDS servers.

Data Science Research Infrastructure

The Data Science Research Infrastructure (DSRI) is a distributed and scalable infrastructure to run Data Science experiments. It enables you to run any workflow or service using Docker containers, on servers with 512 GB of RAM and 128 CPU cores each.

The DSRI also enables you to deploy various popular Data Science applications in a few clicks, to develop resource-intensive experiments:

  • Multiple flavors of JupyterLab (scipy, tensorflow, all-spark, java kernel and more), and JupyterHub with GitHub authentication
  • RStudio, with a complementary Shiny server
  • Visual Studio Code server
  • TensorFlow or PyTorch on NVIDIA GPUs (with JupyterLab or Visual Studio Code)
  • SQL databases (MariaDB, MySQL, PostgreSQL)
  • NoSQL databases (MongoDB, Redis)
  • Graph databases (GraphDB, Blazegraph, Virtuoso)

Distributed computations can also be run on our Apache Spark or Apache Flink clusters.
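
As a minimal illustration of using such a cluster from Python, the sketch below connects PySpark to a Spark master and runs a trivial distributed job. The master URL is a hypothetical placeholder; use the address of the Spark cluster you deployed on the DSRI.

```python
# Minimal sketch: connect PySpark to a Spark cluster and run a small distributed job.
# The master URL is a hypothetical placeholder; replace it with the address of the
# Spark cluster deployed for you on the DSRI.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("spark://my-spark-cluster:7077")  # hypothetical cluster address
    .appName("dsri-spark-example")
    .getOrCreate()
)

# Distribute a trivial computation over the cluster and collect the results
squares = spark.sparkContext.parallelize(range(10)).map(lambda x: x * x).collect()
print(squares)

spark.stop()
```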

Request an account

You can learn how to request an account on the DSRI documentation website: https://maastrichtu-ids.github.io/dsri-documentation

GraphDB triplestore database

You can create an account and ask us to grant you permission to create new repositories.

Each repository acts as an isolated triplestore database that comes with various features:

  • Public or private SPARQL endpoints; public endpoints are accessible from anywhere on the web (CORS is enabled, so you can easily query the SPARQL endpoint from JavaScript; see the query sketch after this list)
  • Easy login to update data using a username/password combination
  • Multiple inference rulesets available, such as RDFS and OWL. You can also upload any custom ruleset.
  • Automated SHACL Shapes validation (GraphDB will validate all triples uploaded against the provided SHACL shapes)
  • User-friendly web UI to manage your repository: import RDF files, delete graphs, define new prefixes and namespaces
  • Each deployed repository is an RDF4J repository, enabling all features of the RDF4J stack (such as the RDF4J API to manage the triplestore).
  • A full-text search index can easily be enabled to make text search queries much faster.
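
Since the endpoints follow the standard SPARQL protocol, they can be queried from any language, not only JavaScript. The sketch below uses Python with requests; the repository URL is a hypothetical placeholder, so replace it with the SPARQL endpoint of your GraphDB repository.

```python
# Minimal sketch: query a public GraphDB repository over the standard SPARQL protocol.
# The endpoint URL is a hypothetical placeholder; use your repository's SPARQL endpoint.
import requests

endpoint = "https://graphdb.example.org/repositories/my-repository"  # hypothetical
query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10"

response = requests.get(
    endpoint,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
response.raise_for_status()

for binding in response.json()["results"]["bindings"]:
    print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])
```
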
More documentation about triplestores

Visit the data2services documentation website for more details on storage options for RDF data.

Web Protégé server

Access Protégé

We host our own instance of web Protégé at https://protege.semanticscience.org

We have enabled some features that are not available in Stanford's web Protégé, such as defining axioms using the OWL Manchester syntax.

  1. Create an account in a minute by providing a username, email and password.

  2. Create a new ontology project.

  3. Start building OWL ontologies! You can also import existing ontologies.

  4. Invite other users to collaborate on your ontology, or make your ontology accessible/editable by anyone with an account on IDS web Protégé by going to the Share tab.

SOLID server

Access SOLID server

This server enables you to create SOLID pods hosted on our servers.

  1. Create your account at https://solid.semanticscience.org
  2. Contact us to get your SOLID account enabled.
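
Resources in a SOLID pod are regular web resources served over HTTP, so public documents can be read with a plain GET request. Below is a minimal sketch; the resource URL is a hypothetical placeholder, since the exact structure of pod URLs depends on the server configuration.

```python
# Minimal sketch: read a public resource from a SOLID pod over HTTP.
# The resource URL is a hypothetical placeholder; the exact structure of pod URLs
# depends on the server configuration.
import requests

resource = "https://your-pod.example.org/profile/card"  # hypothetical pod resource
response = requests.get(resource, headers={"Accept": "text/turtle"})
response.raise_for_status()
print(response.text)  # the resource serialized as Turtle
```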

Nanopublications server

A Nanopublications server is deployed on IDS servers and is part of the Nanopublications network, which replicates published nanopublications across multiple nodes. Feel free to use the Nanopublications network to publish small pieces of RDF data.

Access Nanopublications server

The Nanopublications server can be accessed at http://server.np.dumontierlab.com

Access Nanopublications grlc API

The grlc API to query the Nanopublications server can be accessed at http://grlc.np.dumontierlab.com/api/local/local
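
grlc turns SPARQL queries stored in a Git repository into a web API: each query is exposed as an HTTP operation under the base URL above and documented in the API's Swagger UI. The sketch below shows the general calling pattern; the operation name and parameter are hypothetical, so check the Swagger UI at the base URL for the actual operations.

```python
# Minimal sketch of calling a grlc API operation over HTTP.
# The operation name and parameter are hypothetical placeholders; the real operations
# are listed in the Swagger UI at http://grlc.np.dumontierlab.com/api/local/local
import requests

base = "http://grlc.np.dumontierlab.com/api/local/local"
response = requests.get(
    f"{base}/find_nanopubs_with_text",   # hypothetical operation name
    params={"text": "covid"},            # hypothetical parameter
    headers={"Accept": "application/json"},
)
response.raise_for_status()
print(response.json())
```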

Access Nanopublications HDT

The Linked Data Fragments interface, which serves the nanopublications stored as HDT (compressed RDF), can be accessed at http://ldf.np.dumontierlab.com
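
The Linked Data Fragments interface exposes the HDT file as Triple Pattern Fragments: you request all triples matching a subject/predicate/object pattern and page through the results. Below is a minimal sketch, assuming the standard Triple Pattern Fragments query parameters; the dataset may live under a sub-path, so follow the links from the web interface above.

```python
# Minimal sketch: fetch a Triple Pattern Fragment from the LDF interface.
# Assumes the standard subject/predicate/object query parameters of a Triple Pattern
# Fragments server; the dataset may live under a sub-path of the URL above.
import requests

response = requests.get(
    "http://ldf.np.dumontierlab.com",
    params={"predicate": "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"},
    headers={"Accept": "text/turtle"},
)
response.raise_for_status()
print(response.text[:1000])  # start of the fragment, including paging metadata
```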

Here are some interesting resources for Nanopublications:

  • Nanobench: a web UI to publish nanopublications using templated web forms.
  • Nanopub Python: a Python client to search, publish, and modify nanopublications (it requires Java to be installed, since it uses nanopub-java under the hood); see the sketch after this list.
  • Nanopub-java: a Java application to publish nanopublications.
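
As an illustration of the Nanopub Python client, here is a minimal sketch based on its 1.x API (NanopubClient and Publication); the exact API may differ in the version you install, so check the library's documentation. Publishing also requires a nanopub profile (ORCID and signing keys) to be set up first.

```python
# Minimal sketch of the nanopub Python client, based on its 1.x API
# (NanopubClient / Publication); check the library docs for your installed version.
# Publishing requires a nanopub profile (ORCID and signing keys) to be set up first.
import rdflib
from nanopub import NanopubClient, Publication

# Use the test network while experimenting
client = NanopubClient(use_test_server=True)

# Search the network for nanopublications mentioning some text
for result in client.find_nanopubs_with_text("fair"):
    print(result)

# Build a small assertion graph and publish it as a nanopublication
assertion = rdflib.Graph()
assertion.add((
    rdflib.URIRef("https://example.org/my-dataset"),  # hypothetical subject
    rdflib.RDFS.label,
    rdflib.Literal("My example dataset"),
))
publication = Publication.from_assertion(assertion_rdf=assertion)
client.publish(publication)
```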

FAIR Data Point server

Access FAIR Data Point

The FAIR Data Point enables data owners to expose their datasets using rich machine-readable metadata through a RESTful web service.
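
A FAIR Data Point serves its catalog, dataset, and distribution metadata as RDF (DCAT) over HTTP content negotiation, so it can also be harvested programmatically. Below is a minimal sketch; the FAIR Data Point URL is a hypothetical placeholder.

```python
# Minimal sketch: retrieve the metadata of a FAIR Data Point as RDF (Turtle)
# and parse it with rdflib. The URL is a hypothetical placeholder.
import requests
from rdflib import Graph

fdp_url = "https://fdp.example.org"  # replace with the FAIR Data Point URL

response = requests.get(fdp_url, headers={"Accept": "text/turtle"})
response.raise_for_status()

g = Graph()
g.parse(data=response.text, format="turtle")
print(f"Retrieved {len(g)} triples of metadata")
```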
