
mixtral-8x7b-instruct-v0.1

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture-of-Experts model. Mixtral-8x7B outperforms Llama 2 70B on most tested benchmarks.
Provider: Replicate

Input: $0.30 / 1M tokens

Output: $1.00 / 1M tokens

Related Models

| Model        | Provider | Input $/1M | Output $/1M |
|--------------|----------|------------|-------------|
| mixtral-8x7b | Mistral  | $0.70      | $0.70       |
| mixtral-8x7b | Groq     | $0.27      | $0.27       |

mixtral-8x7b-instruct-v0.1 Pricing Calculator


  • $0.0011 per call
  • $0.11 total
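
The per-call figure above follows directly from the listed rates: cost is input tokens times the input price plus output tokens times the output price, each divided by one million. A minimal sketch of that arithmetic, using the Replicate rates on this page; the token counts below are illustrative assumptions (one combination that reproduces the $0.0011 figure), not the calculator's actual inputs:

```python
# Rates listed on this page (USD per 1M tokens, Replicate).
INPUT_PRICE_PER_1M = 0.30
OUTPUT_PRICE_PER_1M = 1.00

def cost_per_call(input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single call at the listed per-token rates."""
    return (input_tokens * INPUT_PRICE_PER_1M
            + output_tokens * OUTPUT_PRICE_PER_1M) / 1_000_000

# Assumed example: 1,000 input tokens and 800 output tokens.
print(round(cost_per_call(1_000, 800), 4))  # → 0.0011
```

At 100 such calls, the total would be $0.11, matching the total shown.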