API Getting Started
Copy-paste code snippets for every major AI provider. Python, JavaScript, and cURL — tested and working. Pick a provider and start building.
OpenAI
GPT-4o, GPT-4.1, o3, o4-mini
Setup
Install:
Python: pip install openai
JavaScript: npm install openai
Get your API key: platform.openai.com/api-keys
Python:
from openai import OpenAI

client = OpenAI()  # uses OPENAI_API_KEY env var

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in two sentences."}
    ]
)
print(response.choices[0].message.content)
JavaScript:
import OpenAI from "openai";
const client = new OpenAI(); // uses OPENAI_API_KEY env var

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain quantum computing in two sentences." }
  ],
});
console.log(response.choices[0].message.content);
cURL:
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4o",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Explain quantum computing in two sentences."}
]
}' OpenAI uses the standard Chat Completions format. Most third-party tools and frameworks support it natively. For reasoning models (o3, o4-mini), omit the system message and use the "developer" role instead.
Full documentation →
Anthropic
Claude Opus 4, Sonnet 4, Haiku 3.5
Setup
Install:
Python: pip install anthropic
JavaScript: npm install @anthropic-ai/sdk
Get your API key: console.anthropic.com/settings/keys
Python:
import anthropic

client = anthropic.Anthropic()  # uses ANTHROPIC_API_KEY env var

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="You are a helpful assistant.",
    messages=[
        {"role": "user", "content": "Explain quantum computing in two sentences."}
    ]
)
print(message.content[0].text)
JavaScript:
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic(); // uses ANTHROPIC_API_KEY env var

const message = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  system: "You are a helpful assistant.",
  messages: [
    { role: "user", content: "Explain quantum computing in two sentences." }
  ],
});
console.log(message.content[0].text);
cURL:
curl https://api.anthropic.com/v1/messages \
-H "Content-Type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "claude-sonnet-4-20250514",
"max_tokens": 1024,
"system": "You are a helpful assistant.",
"messages": [
{"role": "user", "content": "Explain quantum computing in two sentences."}
]
}' Anthropic uses the Messages API (not Chat Completions). The system prompt is a top-level parameter, not a message role. Extended thinking is available on Claude models — add "thinking": {"type": "enabled", "budget_tokens": 10000} for complex reasoning tasks.
Full documentation →
Google Gemini
Gemini 2.5 Pro, 2.5 Flash, 2.0 Flash
Setup
Install:
Python: pip install google-genai
JavaScript: npm install @google/genai
Get your API key: aistudio.google.com/apikey
Python:
from google import genai

client = genai.Client()  # uses GOOGLE_API_KEY env var

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Explain quantum computing in two sentences."
)
print(response.text)
JavaScript:
import { GoogleGenAI } from "@google/genai";
const ai = new GoogleGenAI({}); // uses GOOGLE_API_KEY env var

const response = await ai.models.generateContent({
  model: "gemini-2.5-flash",
  contents: "Explain quantum computing in two sentences.",
});
console.log(response.text);
cURL:
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=$GOOGLE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"contents": [{
"parts": [{"text": "Explain quantum computing in two sentences."}]
}]
}' Gemini has a generous free tier. The API supports text, images, audio, and video in a single request. Gemini 2.5 models include built-in "thinking" for complex tasks. Use Google AI Studio for quick prototyping.
Full documentation →
Ollama (Local)
Llama 3.3, DeepSeek R1, Qwen, Mistral
Setup
Install:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.3
Python: pip install ollama
JavaScript: npm install ollama
Python:
import ollama
# Make sure Ollama is running: ollama serve
# Pull a model first: ollama pull llama3.3
response = ollama.chat(
    model="llama3.3",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in two sentences."}
    ]
)
print(response["message"]["content"])
JavaScript:
import ollama from "ollama";
// Make sure Ollama is running: ollama serve
// Pull a model first: ollama pull llama3.3
const response = await ollama.chat({
  model: "llama3.3",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain quantum computing in two sentences." }
  ],
});
console.log(response.message.content);
cURL:
# Ollama uses the OpenAI-compatible endpoint locally
curl http://localhost:11434/v1/chat/completions \
curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "llama3.3",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Explain quantum computing in two sentences."}
]
}' Ollama runs models locally on your machine — no API key needed, no data leaves your device. It supports the OpenAI-compatible API format, so most tools that work with OpenAI work with Ollama too. Requires a decent GPU (8GB+ VRAM) for larger models.
Full documentation →