Mistral Nemo
A 12B-parameter model with a 128K-token context window, built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, ...
Available Providers (9)
| Provider | Model ID | Input Cost | Output Cost | Context | Max Output | Docs |
|---|---|---|---|---|---|---|
| | mistral-ai/mistral-nemo | $0.00/MTok | $0.00/MTok | 128K | 8.2K | |
| | mistralai/mistral-nemo | $0.04/MTok | $0.17/MTok | 60.3K | 16K | |
| | mistral/mistral-nemo | $0.04/MTok | $0.17/MTok | 60.3K | 16K | |
| | mistralai/Mistral-Nemo-Instruct-2407 | $0.10/MTok | $0.12/MTok | 16.4K | 8.2K | |
| | mistral-nemo | $0.15/MTok | $0.15/MTok | 128K | 128K | |
| | mistral-nemo | $0.15/MTok | $0.15/MTok | 128K | 128K | |
| | mistral-nemo | $0.15/MTok | $0.15/MTok | 128K | 128K | |
| | neuralmagic/Mistral-Nemo-Instruct-2407-FP8 | $0.49/MTok | $0.71/MTok | 128K | 8.2K | |
| | mistral-nemo | $20.00/MTok | $40.00/MTok | 128K | 16.4K | |
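Prices in the table are quoted per million tokens (MTok), so the cost of a single request is (input tokens × input price + output tokens × output price) ÷ 1,000,000. A minimal sketch of that arithmetic (the helper name and the example token counts are illustrative, using the $0.04/$0.17 listing above):

```python
def estimate_cost(input_tokens, output_tokens, input_per_mtok, output_per_mtok):
    """Estimate request cost in dollars from per-MTok prices."""
    return (input_tokens * input_per_mtok + output_tokens * output_per_mtok) / 1_000_000

# Example: 50K prompt tokens + 2K completion tokens at $0.04 in / $0.17 out
cost = estimate_cost(50_000, 2_000, 0.04, 0.17)
print(f"${cost:.6f}")  # → $0.002340
```

Note that the per-provider "Context" and "Max Output" limits above cap how many tokens a single request can actually consume, so the same model ID can have very different worst-case costs across providers.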
Capabilities
- Reasoning
- Tool Calling
- Attachments
- Open Weights
- Structured Output
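Tool Calling means the model can emit structured function-call requests rather than plain text. Hosted providers typically expose this through an OpenAI-compatible chat-completions request body; a minimal sketch of such a body (the `get_weather` tool and its schema are assumptions for illustration, not part of any provider's API):

```python
import json

# Illustrative tool-calling request body in the OpenAI-compatible shape;
# the tool definition below is a made-up example, not a real API.
payload = {
    "model": "mistral-nemo",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

body = json.dumps(payload)  # serialized request body to POST to the provider
```

When the model decides to call the tool, the response carries the function name and JSON-encoded arguments instead of (or alongside) assistant text; the exact response shape varies by provider.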