DeepSeek has released a preview of its V4 series of open-source models under the MIT license, with weights already available on Hugging Face and ModelScope. The series comprises two MoE models: V4-Pro, with approximately 1.6 trillion total parameters and 49 billion activated per token, and V4-Flash, with 284 billion total parameters and 13 billion activated per token. Both support a 1-million-token context window. According to the official announcement, V4 significantly reduces memory usage and computational overhead in long-context inference compared to V3.2.
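
As a rough illustration of the sparsity implied by those MoE figures, and of fetching the published weights, here is a minimal sketch. It computes each model's per-token activation ratio from the announced numbers and downloads a checkpoint via `huggingface_hub`. The repository ID is an assumption for illustration; the actual name should be checked against DeepSeek's Hugging Face organization page.

```python
# Minimal sketch: MoE activation ratios from the announced figures,
# plus a weight download via huggingface_hub (pip install huggingface_hub).
from huggingface_hub import snapshot_download

MODELS = {
    # name: (total parameters, parameters activated per token)
    "V4-Pro": (1_600e9, 49e9),
    "V4-Flash": (284e9, 13e9),
}

for name, (total, active) in MODELS.items():
    # Fraction of parameters touched per token -- the source of the
    # reduced compute relative to a dense model of the same size.
    print(f"{name}: {active / total:.1%} of parameters active per token")

# NOTE: hypothetical repo ID, not a confirmed name -- verify against
# the deepseek-ai organization on Hugging Face before use.
path = snapshot_download(repo_id="deepseek-ai/DeepSeek-V4-Flash")
print("Weights downloaded to:", path)
```

Run as-is, the loop prints roughly 3.1% for V4-Pro and 4.6% for V4-Flash, which is where the long-context compute savings over a comparably sized dense model would come from.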