OpenAI API
OpenAI offers REST, streaming, and realtime APIs for developers, including the unified Responses API, the legacy Chat Completions API, Embeddings, Image generation, Audio (text-to-speech and transcription), and Assistants with Threads and Runs. The base URL is https://api.openai.com/v1, and authentication uses a Bearer token in the Authorization header. Usage and rate limits vary by account tier and model; see the official rate-limit documentation and your account console.
Supported Models
API Endpoints
List available models and their metadata.
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
Unified Responses API for conversational and tool-use generation tasks.
{
"model": "gpt-4.1",
"input": "Say hello to the world"
}
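As a sketch, the body above can be POSTed to the /v1/responses endpoint with curl, using the same Bearer-token header as the models example:
curl https://api.openai.com/v1/responses \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "input": "Say hello to the world"
  }'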
Chat Completions (legacy), supporting message arrays and streaming.
{
"model": "gpt-4o",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Give me a quick example."
}
],
"stream": true
}
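A curl sketch for the streaming request above; the -N flag disables output buffering so streamed server-sent events print as they arrive:
curl https://api.openai.com/v1/chat/completions \
  -N \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Give me a quick example."}], "stream": true}'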
Create text embeddings for retrieval and similarity operations.
{
"model": "text-embedding-3-large",
"input": "The quick brown fox jumps over the lazy dog"
}
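Sent with curl to /v1/embeddings, the embedding vector is returned inside the response's data array:
curl https://api.openai.com/v1/embeddings \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-3-large", "input": "The quick brown fox jumps over the lazy dog"}'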
Image generation (gpt-image-1 / DALL·E) with size and output format options.
{
"model": "gpt-image-1",
"prompt": "A cute baby sea otter",
"n": 1,
"size": "1024x1024"
}
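The same body can be posted with curl to the image generation endpoint:
curl https://api.openai.com/v1/images/generations \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-image-1", "prompt": "A cute baby sea otter", "n": 1, "size": "1024x1024"}'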
Audio transcription (Whisper / 4o-transcribe family); upload an audio file and receive text. This endpoint accepts multipart/form-data rather than a JSON body, so the parameters below are sent as form fields.
{
"model": "whisper-1",
"file": "@/path/to/audio.mp3"
}
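Because the request is multipart/form-data, a curl sketch uses -F fields; replace /path/to/audio.mp3 with a real file path:
curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F file="@/path/to/audio.mp3" \
  -F model="whisper-1"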
Create a Run on a given Thread for Assistants workflow execution.
{
"assistant_id": "asst_XXXX",
"instructions": "Answer with concise bullet points."
}
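A curl sketch for creating a Run on an existing Thread; thread_XXXX and asst_XXXX are placeholder IDs, and the Assistants API expects the OpenAI-Beta: assistants=v2 header:
curl https://api.openai.com/v1/threads/thread_XXXX/runs \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{"assistant_id": "asst_XXXX", "instructions": "Answer with concise bullet points."}'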