Model API
How to use our AI models in your application
Our GDPR-compliant LLMs can also be used via an API. For this purpose, we offer a RESTful API that closely follows the structure of OpenAI's API.
Authentication
Authentication is done via an API key, which is passed in the request header. The API key can be generated in your meinGPT account.
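Since the API follows OpenAI's structure, the key is presumably sent as a Bearer token in the `Authorization` header. A minimal sketch, assuming the Bearer scheme and using a hypothetical `MEINGPT_API_KEY` environment variable for illustration:

```python
import os

def auth_headers(api_key: str) -> dict[str, str]:
    # Bearer scheme assumed, matching the OpenAI convention this API follows.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# MEINGPT_API_KEY is a hypothetical environment variable name; the key
# itself comes from your meinGPT account settings.
headers = auth_headers(os.environ.get("MEINGPT_API_KEY", ""))
```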
Chat Completions
Endpoint: POST https://app.meingpt.com/api/openai/v1/chat/completions
The endpoint follows the official OpenAI Chat Completions API reference:
https://platform.openai.com/docs/api-reference/chat/create
Our API doesn't support all parameters that OpenAI offers. Requests containing unsupported parameters are still accepted and processed, but those parameters are ignored.
Supported parameters:
model
messages
response_format
stream
temperature
The response is identical to OpenAI's response: https://platform.openai.com/docs/api-reference/chat/object. Both streamed and non-streamed response formats are supported and follow OpenAI's schema.
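A request using only the supported parameters can be sketched with the Python standard library (no third-party dependencies); the `MEINGPT_API_KEY` environment variable name and the `temperature` value are illustrative assumptions:

```python
import json
import os
import urllib.request

ENDPOINT = "https://app.meingpt.com/api/openai/v1/chat/completions"

def build_request(api_key: str, model: str, messages: list,
                  *, stream: bool = False, temperature: float = 0.7) -> urllib.request.Request:
    """Builds a chat-completions request using only the supported parameters."""
    body = {
        "model": model,
        "messages": messages,
        "stream": stream,
        "temperature": temperature,
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # Bearer scheme assumed
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_request(
        os.environ["MEINGPT_API_KEY"],  # hypothetical env var name
        "gpt-4o-mini",
        [{"role": "user", "content": "Hello!"}],
    )
    # Non-streamed response: same shape as OpenAI's chat completion object.
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
        print(data["choices"][0]["message"]["content"])
```

With `stream=True`, the response arrives as OpenAI-style server-sent events (`data: {...}` chunks) instead of a single JSON object.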
Supported Models
For details on model quality, server location, and GDPR compliance, please refer to the settings in your meinGPT organization.
llama-3.1-sonar-large-128k-online
gpt-4o-mini
gpt-4o
o1-us
o1-mini-us
claude-3-5-sonnet
gemini-1.5-pro
gemini-1.5-flash
SDKs
The meinGPT LLM API supports the same client SDKs as the official OpenAI API.
https://platform.openai.com/docs/libraries
The base URL must always be specified:
baseURL for the Node.js client
base_url for the Python client
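With the official OpenAI Python SDK, this means pointing `base_url` at the meinGPT endpoint (derived from the chat-completions URL above). A minimal sketch; `MEINGPT_API_KEY` is a hypothetical environment variable name:

```python
import os

# OpenAI-compatible base URL, derived from the chat-completions endpoint.
MEINGPT_BASE_URL = "https://app.meingpt.com/api/openai/v1"

def make_client():
    # Import here so the sketch only needs `pip install openai` when a
    # client is actually created.
    from openai import OpenAI
    return OpenAI(
        base_url=MEINGPT_BASE_URL,
        api_key=os.environ["MEINGPT_API_KEY"],  # hypothetical env var name
    )

if __name__ == "__main__":
    client = make_client()
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(completion.choices[0].message.content)
```

The Node.js client works the same way, with `baseURL` passed to the `OpenAI` constructor instead.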