Grok 4.20 — Industry-leading speed with 2M context
xAI · Grok

Grok 4.20

High-speed agentic xAI model with 2M context

Grok 4.20 combines industry-leading speed with strong agentic tool-calling and a giant 2M-token context window. A Multi-agent variant powers swarms of cooperating Groks.


Context window: 2M tokens
Max output: 128K tokens
Released: March 31, 2026
Pricing: xAI API

Key features

  • Industry-leading inference speed.
  • Native agentic tool-calling at scale.
  • 2M-token context window.
  • Multi-agent variant for cooperative agent workflows.
Best for

Use Grok 4.20 when you need very long context plus fast tool-using agents, and especially for multi-agent orchestration.
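As a rough illustration of the agentic tool-calling workflow described above, the sketch below builds a chat-completions request that declares one callable tool. It assumes the xAI API follows the widely used OpenAI-compatible schema; the model identifier `grok-4.20`, the endpoint URL, and the `search_codebase` tool are hypothetical placeholders, not confirmed values.

```python
import json

# Assumed endpoint, following the OpenAI-compatible convention.
XAI_ENDPOINT = "https://api.x.ai/v1/chat/completions"

def build_request(user_prompt: str) -> dict:
    """Build a chat-completions payload declaring one callable tool."""
    return {
        "model": "grok-4.20",  # hypothetical model identifier
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "search_codebase",  # hypothetical tool
                    "description": "Search a large codebase for a symbol.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "query": {"type": "string"},
                        },
                        "required": ["query"],
                    },
                },
            }
        ],
    }

payload = build_request("Where is the retry logic implemented?")
print(json.dumps(payload, indent=2))
```

In this pattern the model decides when to emit a tool call; the client executes the tool and feeds the result back as a follow-up message, which is also how a Multi-agent variant could fan work out across cooperating instances.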

Frequently Asked Questions

What is the Multi-agent variant of Grok 4.20?

Grok 4.20 Multi-agent enables cooperative swarms of Grok instances to work in parallel, sharing information and dividing complex tasks automatically for faster, broader coverage.

How large is Grok 4.20's context window?

Grok 4.20 has a 2M-token context window — one of the largest in the industry — making it ideal for analysing very long documents, large codebases or extended conversation histories.
