# Meta: Llama 4 Maverick
Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass.
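To illustrate why an MoE model activates only a fraction of its total parameters per forward pass, here is a minimal sketch of top-k expert routing. This is an illustrative toy in NumPy, not Meta's implementation; the gate, expert count, and top-k value are stand-in assumptions.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route input x to the top-k experts, weighted by softmax gate scores.

    Only the k selected experts run, so active parameters per token are a
    small fraction of the total parameter count (toy illustration only).
    """
    logits = x @ gate_w                       # gate score per expert
    top = np.argsort(logits)[-k:]             # indices of the k highest-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                  # softmax over the selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy setup: 4 experts, each a random linear map (hypothetical sizes)
rng = np.random.default_rng(0)
d, num_experts = 8, 4
gate_w = rng.normal(size=(d, num_experts))
experts = [(lambda W: (lambda x: x @ W))(rng.normal(size=(d, d)))
           for _ in range(num_experts)]
y = moe_forward(rng.normal(size=d), gate_w, experts)
print(y.shape)  # (8,)
```

In a production MoE like Llama 4 Maverick, this routing happens per token inside each MoE layer, which is how a 128-expert model keeps only ~17B parameters active per pass.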
## Available Providers (1)
| Provider | Model ID | Input Cost | Output Cost | Context | Max Output | Docs |
|---|---|---|---|---|---|---|
| | meta-llama/llama-4-maverick | $0.15/MTok | $0.60/MTok | 1.0M tokens | 16.4K tokens | |
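The listed rates are per million tokens (MTok), billed separately for input and output. A quick sketch of the per-request arithmetic, using the prices from the table above (the function name and example token counts are illustrative):

```python
INPUT_COST_PER_MTOK = 0.15   # $ per million input tokens (from the table)
OUTPUT_COST_PER_MTOK = 0.60  # $ per million output tokens (from the table)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one request at the listed rates."""
    return (input_tokens * INPUT_COST_PER_MTOK
            + output_tokens * OUTPUT_COST_PER_MTOK) / 1_000_000

# e.g. a 100k-token prompt with a 2k-token completion:
print(f"${request_cost(100_000, 2_000):.4f}")  # $0.0162
```

Note that output tokens cost 4x input tokens at these rates, so long completions dominate the bill even against large prompts.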
## Capabilities

- Reasoning
- Tool Calling
- Attachments
- Open Weights
- Structured Output