hugging-face-text-to-text
Description
Note
More information about the service specification can be found in the Core concepts > Service documentation.
This service uses Hugging Face's inference API to query text-to-text AI models.
You can choose any model from the Hugging Face Hub that is available on the Inference API, takes text (wrapped in JSON) as input, and outputs text (as JSON).
The model must accept a single JSON input with the following structure:
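A minimal sketch of that payload, assuming the standard Hugging Face Inference API format in which the input text is sent under an `inputs` field (the field name comes from that API, not from this page):

```json
{
  "inputs": "Your input text goes here"
}
```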
This service takes two input files:
- A JSON file that defines the model you want to use and your access token.
- A text file.
JSON example:
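A minimal sketch of such a file, assuming hypothetical field names `model` and `api_token` (the actual keys expected by the service may differ):

```json
{
  "model": "gpt2",
  "api_token": "hf_..."
}
```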
In this example, the "gpt2" model is used for text generation.
This service builds the JSON payload from the input text and queries the given model. The generated text is returned as JSON.
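As an illustration of how such a query could look, here is a minimal Python sketch that posts the payload to the Hugging Face Inference API. It is not the service's actual implementation; the endpoint URL, token, and model name are assumptions used for the example.

```python
import requests

# Assumed values for this sketch: a text-generation model ("gpt2") on the
# public Hugging Face Inference API and a placeholder access token.
API_URL = "https://api-inference.huggingface.co/models/gpt2"
API_TOKEN = "hf_..."  # your Hugging Face access token


def query(text: str):
    """Wrap the input text in a JSON payload, query the model and return the JSON response."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"inputs": text},
    )
    response.raise_for_status()
    # Text-generation models typically answer with a list such as
    # [{"generated_text": "..."}].
    return response.json()


if __name__ == "__main__":
    print(query("The quick brown fox"))
```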
The API documentation for this service is automatically generated by FastAPI using the OpenAPI standard. A user-friendly interface provided by Swagger is available under the /docs
route, where the endpoints of the service are described.
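For example, if the service is running locally (the base URL and port below are assumptions), the Swagger interface is served at `/docs` and the raw OpenAPI schema at `/openapi.json`, which FastAPI exposes by default:

```python
import requests

# Assumed base URL; adjust it to wherever the service is running.
BASE_URL = "http://localhost:8080"

# Fetch the automatically generated OpenAPI schema and list the available routes.
spec = requests.get(f"{BASE_URL}/openapi.json").json()
print(spec["info"]["title"])
print(list(spec.get("paths", {})))
```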
Environment variables
Check the Core concepts > Service > Environment variables documentation for more details.
Run the tests with Python
Check the Core concepts > Service > Run the tests with Python documentation for more details.
Start the service locally
Check the Core concepts > Service > Start the service locally documentation for more details.