
DeepSeek Leaks MODEL1 - New Flagship AI Shocks The Industry
DeepSeek's "Model 1" Leak and Potential V4 Release
New information suggests DeepSeek may soon unveil its next flagship AI model. On January 21, 2026, developers noticed that DeepSeek had updated 114 files in its FlashMLA repository on GitHub. The updates introduce a new identifier, "Model 1", alongside the existing V3.2, suggesting a distinct next-generation architecture rather than a minor revision.
Technical Changes
Code analysis reveals significant technical changes, including:
- Redesigned KV cache layout: a fundamental change to how attention state is stored, aimed at memory efficiency
- Changes to sparsity handling: pointing to further optimization of sparse attention
- Explicit support for FP8 decoding: signaling a focus on extreme efficiency at scale
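The leaked code itself is not reproduced here, but the idea behind FP8 KV-cache decoding can be illustrated: store attention keys and values in an 8-bit floating-point format (e4m3) with a scale factor, cutting cache memory roughly in half versus FP16. The sketch below simulates e4m3 rounding in NumPy; the quantization scheme is an assumption for illustration, not DeepSeek's actual implementation.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite magnitude in the e4m3 format

def quantize_fp8_e4m3(x):
    """Simulate rounding to float8 e4m3 (3 explicit mantissa bits);
    subnormals and exponent underflow are ignored for brevity."""
    x = np.clip(np.asarray(x, np.float32), -FP8_E4M3_MAX, FP8_E4M3_MAX)
    mant, exp = np.frexp(x)              # x = mant * 2**exp, |mant| in [0.5, 1)
    mant = np.round(mant * 16.0) / 16.0  # snap mantissa to a 1/16 grid
    return np.ldexp(mant, exp).astype(np.float32)

def compress_kv(kv):
    """Per-tensor scaling so the dynamic range fits e4m3, then quantize."""
    scale = np.abs(kv).max() / FP8_E4M3_MAX + 1e-12
    return quantize_fp8_e4m3(kv / scale), scale

def decompress_kv(q, scale):
    return q * scale

rng = np.random.default_rng(0)
k = rng.standard_normal((128, 64)).astype(np.float32)  # (seq_len, head_dim)
q8, s = compress_kv(k)
k_hat = decompress_kv(q8, s)
rel_err = np.abs(k - k_hat).max() / np.abs(k).max()
print(f"max relative error after FP8 round-trip: {rel_err:.3%}")
```

In a real kernel the 8-bit codes are stored directly and dequantized inside the attention computation; the round-trip error stays within a few percent, which is why FP8 is attractive for decode-time memory savings.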
Speculation suggests this could be DeepSeek V4, with a potential release around the Lunar New Year (mid-February). The new model is also expected to integrate recent research such as Modified Hierarchical Connections (MHC) and the Engram memory module.
Zhipu AI Releases GLM-4.7 Flash
Zhipu AI (Z.ai) has released GLM-4.7 Flash, a new 31B-parameter Mixture-of-Experts (MoE) model designed for high-performance local deployment. The model supports a 128,000-token context window and is optimized for demanding tasks such as coding, reasoning, and agentic workflows.
Zhipu AI positions it as a lightweight, potentially free-tier option that remains competitive with larger models. Benchmarks show it performing strongly against competitors like Qwen-32B, making it a viable option for users who need state-of-the-art coding and reasoning capabilities without requiring massive GPU clusters.
Breakthrough in Computational Emotion AI
Researchers from Japan (NAIST and Osaka University) have published a study on emotion computation based on the Theory of Constructed Emotion. Unlike traditional AI that simply categorizes expressions, this system models emotion as a process combining internal bodily signals (interoception) and external sensory inputs.
Using a model called Multi-layered Multimodal Latent Dirichlet Allocation (MMLDA), the AI learned to identify emotion categories from unlabeled data (including heart rate, visual inputs, and language), reaching 75% agreement with participants' self-reported emotions. This technology could advance mental health support and human-robot interaction by enabling AI to model how emotions form rather than merely label them.
Nous Research Launches NousCoder-14B
Nous Research has released NousCoder-14B, a specialized model for competitive programming. Built on top of Qwen-3 14B, it was trained using reinforcement learning (RL) where code is executed in a sandbox environment.
The model is rewarded only if the code passes all hidden tests and penalized for failures, exceeded time limits, or memory overuse. This "survival of the fittest" training allowed NousCoder-14B to achieve a 67.87% Pass@1 score on LiveCodeBench, a jump of more than 7 percentage points over the base model. It demonstrates the power of verifiable rewards and execution-based training for complex reasoning tasks.
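Nous Research's actual training stack is not public here, but the reward described above can be sketched as an execution-based verifier: run the candidate program against hidden input/output pairs and grant reward only on a clean sweep. The function names and the subprocess "sandbox" below are illustrative; a production sandbox would also cap memory and block network and filesystem access.

```python
import os
import subprocess
import sys
import tempfile

def run_candidate(code: str, stdin: str, time_limit_s: float = 2.0):
    """Run candidate code in a subprocess with a wall-clock time limit."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path], input=stdin,
            capture_output=True, text=True, timeout=time_limit_s,
        )
        return proc.stdout, proc.returncode
    except subprocess.TimeoutExpired:
        return None, -1  # treat a timeout like any other failure
    finally:
        os.unlink(path)

def verifiable_reward(code: str, tests: list[tuple[str, str]]) -> float:
    """All-or-nothing reward: 1.0 only if every hidden test passes."""
    for stdin, expected in tests:
        out, rc = run_candidate(code, stdin)
        if rc != 0 or out is None or out.strip() != expected.strip():
            return 0.0  # wrong answer, crash, or time-limit exceeded
    return 1.0

solution = "print(sum(int(x) for x in input().split()))"
hidden_tests = [("1 2 3", "6"), ("10 -4", "6")]
print(verifiable_reward(solution, hidden_tests))  # 1.0
```

Because the reward is computed by actually executing the program, it cannot be gamed by plausible-looking but wrong code, which is what makes this signal "verifiable" for RL.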