DeepSeek-V2
deepseek-v2

DeepSeek-V2 is an efficient and cost-effective large-scale Mixture-of-Experts (MoE) language model with 236B total parameters, of which 21B are activated for each token. It performs well on a range of standard benchmarks while reducing training cost and inference memory footprint.
Provider: DeepSeek
Input price: $0.14 per 1M tokens
Output price: $0.28 per 1M tokens

DeepSeek-V2 Pricing Calculator

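A minimal cost-estimation sketch based on the per-token prices listed above. The request sizes in the example are illustrative assumptions, not figures taken from this page.

```python
# Estimate DeepSeek-V2 API cost from token counts.
# Prices come from the listing above; token counts below are
# illustrative assumptions only.

INPUT_PRICE_PER_M = 0.14   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.28  # USD per 1M output tokens


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M


# Example: a call with 1,500 input tokens and 500 output tokens
# costs roughly $0.00035.
print(f"${estimate_cost(1_500, 500):.5f}")
```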