
This reasoning AI model is designed for fast coding. The developers highlight its high speed and its ability to find bugs faster than competing models, but caution that on overly vague queries the model tends to offer oversimplified solutions.
Grok Code Fast 1 has 314 billion parameters, is built on the Mixture of Experts (MoE) architecture and can process up to 262,000 tokens of context.
Access:
  • free on GitHub Copilot until 2 September, then available to Pro, Pro+, Business and Enterprise subscribers;
  • a week of free testing in Cursor;
  • unlimited in Windsurf for Pro and Teams subscribers.
It costs $0.20 per one million input tokens and $1.50 per one million output tokens.
| model             | window | $/mtok in | $/mtok out |
|-------------------|--------|-----------|------------|
| grok-code-fast    | 256k   | 0.20      | 1.50       |
| gpt-5-mini        | 400k   | 0.25      | 2.00       |
| claude-haiku-3.5  | 200k   | 0.80      | 4.00       |
| qwen3-coder-flash | 256k   | 0.80      | 4.00       |
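A quick way to compare these rates is to compute the dollar cost of a single request. The sketch below uses the per-million-token prices from the table; the token counts in the example are hypothetical:

```python
# Per-million-token prices (input, output) in USD, as listed above.
PRICES = {
    "grok-code-fast": (0.20, 1.50),
    "gpt-5-mini": (0.25, 2.00),
    "claude-haiku-3.5": (0.80, 4.00),
    "qwen3-coder-flash": (0.80, 4.00),
}

def request_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Dollar cost of one request at the listed per-million-token rates."""
    p_in, p_out = PRICES[model]
    return (tokens_in * p_in + tokens_out * p_out) / 1_000_000

# Hypothetical coding request: 50k tokens of context in, 2k tokens out.
print(f"${request_cost('grok-code-fast', 50_000, 2_000):.4f}")  # → $0.0130
```

For a context-heavy coding workload like this, input pricing dominates, which is where grok-code-fast's $0.20/mtok rate gives it the edge over the other models in the table.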