Context Window Comparison

42 LLMs ranked by context window size. Larger context means more text can be processed in a single conversation — crucial for analysing long documents, codebases, and research papers.

Distribution by context window:

- 1M+ tokens: 11 models
- 200K–999K tokens: 12 models
- 32K–199K tokens: 19 models
- <32K tokens: 0 models
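The counts above come from bucketing each model's context length against the same four thresholds. A minimal sketch of that bucketing, using a small illustrative subset of the full table below:

```python
from collections import Counter

# A few (model, context window in tokens) entries from the table below.
CONTEXT_WINDOWS = {
    "Gemini 2.5 Pro": 1_000_000,
    "GPT-5.2": 400_000,
    "O3": 200_000,
    "GPT-4o-mini": 128_000,
    "DeepSeek R1": 64_000,
}

def bucket(context_tokens: int) -> str:
    """Map a context window size (in tokens) to its range label."""
    if context_tokens >= 1_000_000:
        return "1M+ tokens"
    if context_tokens >= 200_000:
        return "200K–999K"
    if context_tokens >= 32_000:
        return "32K–199K"
    return "<32K tokens"

# Tally how many models fall into each range.
counts = Counter(bucket(ctx) for ctx in CONTEXT_WINDOWS.values())
```

Running the same tally over all 42 rows reproduces the 11/12/19/0 split.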

Full Table

| # | Model | Provider | Context | Quality |
|---:|---|---|---:|---:|
| 1 | Gemini 2.5 Pro | Google | 1M | 83.0 |
| 2 | Gemini 2.0 Flash | Google | 1M | 81.0 |
| 3 | Gemini 2.5 Flash | Google | 1M | 78.0 |
| 4 | Gemini 2.5 Flash Lite | Google | 1M | 78.0 |
| 5 | Gemini 2.0 Flash Lite | Google | 1M | 76.0 |
| 6 | Llama 4 Maverick (OSS) | Meta | 1M | 75.0 |
| 7 | GPT-4.1 | OpenAI | 1M | 77.0 |
| 8 | GPT-4.1 Nano | OpenAI | 1M | 75.0 |
| 9 | Claude Opus 4.6 | Anthropic | 1M | 89.0 |
| 10 | Claude Sonnet 4.6 | Anthropic | 1M | 86.0 |
| 11 | Claude Sonnet 4 | Anthropic | 1M | 79.0 |
| 12 | GPT-5.2 | OpenAI | 400K | 90.0 |
| 13 | GPT-5 | OpenAI | 400K | 87.0 |
| 14 | GPT-5 Nano | OpenAI | 400K | 78.0 |
| 15 | Nova Pro 1.0 | Amazon | 300K | 78.0 |
| 16 | Nova Lite 1.0 | Amazon | 300K | 72.0 |
| 17 | Grok 4 | xAI | 256K | 88.0 |
| 18 | Command A (OSS) | Cohere | 256K | 82.0 |
| 19 | O4 Mini | OpenAI | 200K | 90.0 |
| 20 | O3 | OpenAI | 200K | 88.0 |
| 21 | O3 Pro | OpenAI | 200K | 88.0 |
| 22 | Claude Opus 4 | Anthropic | 200K | 84.0 |
| 23 | Claude 3.5 Haiku | Anthropic | 200K | 82.0 |
| 24 | DeepSeek V3.2 (OSS) | DeepSeek | 163.8K | 86.0 |
| 25 | DeepSeek V3 (OSS) | DeepSeek | 163.8K | 76.0 |
| 26 | Qwen3 235B A22B (OSS) | Alibaba | 131.1K | 87.0 |
| 27 | Grok 3 Beta | xAI | 131.1K | 85.0 |
| 28 | Llama 3.3 70B Instruct (OSS) | Meta | 131.1K | 79.0 |
| 29 | QwQ 32B (OSS) | Alibaba | 131.1K | 78.0 |
| 30 | Mistral Nemo (OSS) | Mistral | 131.1K | 72.0 |
| 31 | Mistral Large (OSS) | Mistral | 128K | 86.0 |
| 32 | GPT-4o-mini | OpenAI | 128K | 80.0 |
| 33 | Command R+ (08-2024) (OSS) | Cohere | 128K | 79.0 |
| 34 | Mistral Small 3.1 24B (OSS) | Mistral | 128K | 76.0 |
| 35 | GPT-4o (extended) | OpenAI | 128K | 75.0 |
| 36 | Command R (08-2024) (OSS) | Cohere | 128K | 73.0 |
| 37 | Nova Micro 1.0 | Amazon | 128K | 68.0 |
| 38 | Command R7B (12-2024) (OSS) | Cohere | 128K | 65.0 |
| 39 | Sonar | Perplexity | 127.1K | 74.0 |
| 40 | Reka Flash 3 | Reka | 65.5K | 74.0 |
| 41 | DeepSeek R1 (OSS) | DeepSeek | 64K | 85.0 |
| 42 | Qwen2.5 72B Instruct (OSS) | Alibaba | 32.8K | 71.0 |
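Quality and context length trade off in this table: the top-quality models (GPT-5.2, O4 Mini) are not the longest-context ones. A common selection step is therefore to filter by the context you need, then take the best quality among the survivors. A minimal sketch over a few rows from the table above:

```python
# (model, context window in tokens, quality score) — a few rows from the table above.
MODELS = [
    ("GPT-5.2", 400_000, 90.0),
    ("Claude Opus 4.6", 1_000_000, 89.0),
    ("Grok 4", 256_000, 88.0),
    ("DeepSeek R1", 64_000, 85.0),
]

def best_for(required_tokens: int):
    """Highest-quality model whose context window covers the requirement.

    Returns None if no model is large enough.
    """
    candidates = [m for m in MODELS if m[1] >= required_tokens]
    return max(candidates, key=lambda m: m[2], default=None)
```

For a 500K-token job this subset yields Claude Opus 4.6 (the only entry over 500K with the top remaining quality), while anything under 400K falls to GPT-5.2.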

What is a context window?

The context window is the maximum amount of text a model can process in a single request — including both your input and its response. It's measured in tokens (roughly 0.75 words per token).

- **32K tokens** — ~24,000 words / ~96 pages. Enough for a long article or short story.
- **200K tokens** — ~150,000 words / ~600 pages. Enough for a full novel or large codebase.
- **1M+ tokens** — ~750,000+ words / ~3,000+ pages. Entire book series, repos, or document collections.
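The word and page figures above follow from the ~0.75 words-per-token heuristic stated earlier, plus an assumed ~250 words per page (implied by the figures, not stated explicitly):

```python
WORDS_PER_TOKEN = 0.75  # heuristic from the section above
WORDS_PER_PAGE = 250    # assumption implied by the figures (24,000 words ≈ 96 pages)

def tokens_to_words(tokens: int) -> int:
    """Rough word-count equivalent of a token budget."""
    return round(tokens * WORDS_PER_TOKEN)

def tokens_to_pages(tokens: int) -> int:
    """Rough page-count equivalent of a token budget."""
    return round(tokens_to_words(tokens) / WORDS_PER_PAGE)
```

For 32K tokens this gives ~24,000 words and ~96 pages, matching the first range above. Actual token counts vary by tokenizer and language, so treat these as order-of-magnitude estimates.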