Hugging Face

Description

Note

More information about the service specification can be found in the Core concepts > Service documentation.

This service uses Hugging Face's model hub API to directly query AI models.

You can choose any model available through the Inference API on the Hugging Face Hub that takes image, audio, or text (JSON) files as input and outputs one of those same types.

This service has two input files:

  • A JSON file that defines the model you want to use, your access token, and the input/output types you expect.
  • A zip file containing the input data for the model.
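As a sketch of how these two input files could be prepared with Python's standard library (the file names, token, and payload below are illustrative placeholders, not values mandated by the service):

```python
import json
import zipfile

# 1. The JSON configuration file: model endpoint, access token,
#    and the expected input/output MIME types.
#    "your_token" is a placeholder, not a working credential.
config = {
    "api_token": "your_token",
    "api_url": "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2",
    "input_type": "application/json",
    "output_type": "application/json",
}
with open("config.json", "w") as f:
    json.dump(config, f, indent=4)

# 2. The zip file holding the model input
#    (here, a question-answering payload for roberta-base-squad2).
payload = {
    "inputs": {
        "question": "What is my name?",
        "context": "My name is Clara Postlethwaite and I live in Berkeley.",
    }
}
with zipfile.ZipFile("input.zip", "w") as zf:
    zf.writestr("input.json", json.dumps(payload))
```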

JSON example:

{
    "api_token": "your_token",
    "api_url": "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2",
    "input_type": "application/json",
    "output_type": "application/json"
}

This specific model, "roberta-base-squad2", was trained on question-answer pairs (including unanswerable questions) for the task of question answering.

The input looks like this:

{
   "inputs": {
      "question":"What is my name?",
      "context":"My name is Clara Postlethwaite and I live in Berkeley."
   }
}

This is only an example; check the model hub to see what the input of the model you want to use looks like. Don't forget to compress the input into a zip file before uploading it!
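Under the hood, a query against the Inference API boils down to an authenticated POST of the JSON payload to the model URL from the configuration file. A minimal sketch of assembling such a request (the `build_inference_request` helper is hypothetical, and the request is built but not sent, since the token and URL are placeholders):

```python
import json

def build_inference_request(config: dict, payload: dict) -> dict:
    """Hypothetical helper: assemble the pieces of a Hugging Face
    Inference API call from the service's config file and the
    unzipped input payload."""
    return {
        "url": config["api_url"],
        "headers": {
            "Authorization": f"Bearer {config['api_token']}",
            "Content-Type": config["input_type"],
        },
        "body": json.dumps(payload),
    }

config = {
    "api_token": "your_token",
    "api_url": "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2",
    "input_type": "application/json",
    "output_type": "application/json",
}
payload = {
    "inputs": {
        "question": "What is my name?",
        "context": "My name is Clara Postlethwaite and I live in Berkeley.",
    }
}

request = build_inference_request(config, payload)
# Actually sending it would look something like:
#   requests.post(request["url"], headers=request["headers"], data=request["body"])
```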


The API documentation for this service is automatically generated by FastAPI using the OpenAPI standard. A user-friendly interface provided by Swagger is available under the /docs route, where the endpoints of the service are described.

Environment variables

Check the Core concepts > Service > Environment variables documentation for more details.

Run the tests with Python

Check the Core concepts > Service > Run the tests with Python documentation for more details.

Start the service locally

Check the Core concepts > Service > Start the service locally documentation for more details.