# 🪐 Qwen3.5-9B-GLM5.1-Distill-v1
## 📌 Model Overview
**Model Name:** `Jackrong/Qwen3.5-9B-GLM5.1-Distill-v1`
**Base Model:** Qwen3.5-9B
**Training Type:** Supervised Fine-Tuning (SFT, Distillation)
**Parameter Scale:** 9B
**Training Framework:** Unsloth
This model is a distilled variant of **Qwen3.5-9B**, trained on high-quality reasoning data derived from **GLM-5.1**.
The primary goals are to:
- Improve **structured reasoning ability**
- Enhance **instruction-following consistency**
- Activate **latent knowledge via better reasoning structure**
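Since this is a standard SFT checkpoint on a Qwen base, a minimal inference sketch using the generic Hugging Face `transformers` chat-template pattern may look like the following. This is an assumption about the intended usage, not an API confirmed by this card; the prompt and generation settings are illustrative only.

```python
# Minimal inference sketch (assumes the generic transformers causal-LM API;
# settings below are illustrative, not values published by this model card).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Jackrong/Qwen3.5-9B-GLM5.1-Distill-v1"

def generate(prompt: str, max_new_tokens: int = 512) -> str:
    """Load the model and run one chat turn (downloads the 9B weights)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Explain the Pythagorean theorem step by step."))
```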
## 📊 Training Data
### Main Dataset
- `Jackrong/GLM-5.1-Reasoning-1M-Cleaned`
  - Cleaned from the original `Kassadin88/GLM-5.1-1000000x` dataset
  - Generated by a **GLM-5.1** teacher model
  - Approximately **700×** the scale of `Jackrong/Qwen3.5-reasoning-700x`
  - Training used a **filtered subset** of this dataset, not the full source
### Auxiliary Dataset
- `Jackrong/Qwen3.5-reasoning-700x`
...