Quickstart Guide
Get up and running with the LLMAPI.dev unified API in minutes.
Get Your API Key
Before you can start making requests to the LLMAPI.dev API, you'll need to obtain an API key from your dashboard.
Steps to get your API key:
- Sign up for an account at llmapi.dev
- Navigate to your dashboard
- Go to the "API Keys" section
- Click "Create New API Key"
- Copy your API key and store it securely
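A common way to store the key securely is an environment variable, which is what the code examples in this guide read. For example, in your shell profile (the key value below is a placeholder):

```shell
# Keep the key out of source code; the examples read LLMAPI_API_KEY.
export LLMAPI_API_KEY="your-api-key-here"
```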
Install the OpenAI Client
LLMAPI.dev is compatible with the OpenAI SDK, so you can use the official OpenAI client libraries.
```shell
pip install openai
```
Your First Completion
Now let's make your first API call to generate a chat completion. This example sends a simple message and prints the response; note that the client's base URL is configured to point at LLMAPI's endpoint.
```python
from openai import OpenAI
import os

# Make sure to set your API key in your environment variables
# as LLMAPI_API_KEY, or pass it with the api_key parameter.
client = OpenAI(
    api_key=os.environ.get("LLMAPI_API_KEY"),
    base_url="https://api.llmapi.dev/api"
)

response = client.chat.completions.create(
    model="mistralai/mistral-7b-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"}
    ]
)

print(response.choices[0].message.content)
```
Streaming Responses
For a better user experience, you can stream responses as they're generated, displaying partial output in real time. Pass stream=True and iterate over the chunks as they arrive.
```python
from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ.get("LLMAPI_API_KEY"),
    base_url="https://api.llmapi.dev/api"
)

stream = client.chat.completions.create(
    model="mistralai/mistral-7b-instruct",
    messages=[
        {"role": "user", "content": "Write a short story about a robot who discovers music."}
    ],
    stream=True
)

# Each chunk carries an incremental delta; print it as soon as it arrives.
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
print()
```
List Available Models
You can retrieve a list of all available models and their details using the models endpoint. This helps you understand pricing, capabilities, and context limits for each model.
```python
from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ.get("LLMAPI_API_KEY"),
    base_url="https://api.llmapi.dev/api"
)

models = client.models.list()
for model in models.data:
    print(f"Model: {model.id}")
    print(f"Created: {model.created}")
    print(f"Owned by: {model.owned_by}")
    print("---")
```
Sample Response
```json
[
  {
    "id": "openai/gpt-4o",
    "name": "gpt-4o",
    "displayName": "GPT-4o",
    "description": "GPT-4o is OpenAI's flagship model with vision capabilities, offering high intelligence and efficiency for complex, multi-step tasks.",
    "shortDescription": "OpenAI's flagship model with vision capabilities",
    "provider": "OpenAI",
    "categories": ["text", "vision", "reasoning"],
    "isDefault": true,
    "isPopular": true,
    "isDisabled": false,
    "pricing": {
      "prompt": 0.0000025,
      "completion": 0.00001,
      "request": null,
      "image": 0.001275,
      "webSearch": null,
      "internalReasoning": null
    },
    "supportedParameters": ["temperature", "max_tokens", "top_p", "frequency_penalty", "presence_penalty", "stop"],
    "contextLength": 128000
  },
  {
    ...
  }
]
```
Response Fields:
- displayName: Human-readable model name
- provider: The company that created the model
- categories: Model capabilities (text, vision, reasoning)
- contextLength: Maximum tokens the model can process
- pricing: Cost per token for different operations
- supportedParameters: Available configuration options
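Because the pricing fields are expressed in dollars per token, you can estimate a request's cost from the token counts a response reports. A quick sketch (estimate_cost is a hypothetical helper, not part of the API; prices taken from the sample response above):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int, pricing: dict) -> float:
    """Estimate request cost in dollars from per-token prices."""
    return prompt_tokens * pricing["prompt"] + completion_tokens * pricing["completion"]

# Prices from the gpt-4o entry in the sample response above.
pricing = {"prompt": 0.0000025, "completion": 0.00001}
cost = estimate_cost(1200, 350, pricing)
print(f"${cost:.6f}")  # → $0.006500
```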
Next Steps
Great! You've successfully made your first API calls. From here, explore the model list to compare pricing, capabilities, and context limits, and try streaming responses in your own application.