Mixtral-8x7B is a pretrained generative Sparse Mixture-of-Experts large language model (LLM). It outperforms Llama 2 70B on most tested benchmarks.
- Creator: Mistral
- Context: 32k tokens
| Provider | Input $/1M | Output $/1M |
|---|---|---|
| | 0.3 | 1 |
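As a rough sketch of how these per-million-token rates translate into a per-request dollar cost (the helper name and token counts below are illustrative, not part of this page):

```python
# Hypothetical helper (not an API from this page): converts the listed
# per-million-token rates into a per-request dollar cost.
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_million: float = 0.3,
                 output_per_million: float = 1.0) -> float:
    return (input_tokens * input_per_million
            + output_tokens * output_per_million) / 1_000_000

# Example: a request with 1,000 input and 1,000 output tokens.
print(request_cost(1_000, 1_000))  # 0.0013
```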
Related Models
| Model | Provider | Input $/1M | Output $/1M |
|---|---|---|---|
| | | 0.7 | 0.7 |
| | | 0.27 | 0.27 |
mixtral-8x7b-instruct-v0.1 Pricing Calculator
- $0.0011 per call
- $0.11 total
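These figures are consistent with, for example, 2,000 input tokens and 500 output tokens per call over 100 calls at the rates listed above; the calculator's actual inputs are not shown, so these counts are an assumption:

```python
# Assumed token counts and call volume; the calculator's real inputs
# are not shown on this page.
input_tokens, output_tokens, calls = 2_000, 500, 100

# Rates from the pricing table: $0.3 per 1M input, $1.0 per 1M output.
per_call = (input_tokens * 0.3 + output_tokens * 1.0) / 1_000_000
print(per_call)                     # 0.0011 per call
print(round(per_call * calls, 2))   # 0.11 total
```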