GLM-5.1 Open Source AI Model
GLM-5.1 is a new open-source AI model that has achieved top-tier results, ranking #1 among open-source models and #3 globally across SWE-Bench Pro, Terminal-Bench, and NL2Repo. It can run autonomously for 8 hours and has been shown to outperform Claude Opus 4.6, GPT-5.4, and Gemini 3.1 Pro on SWE-Bench Pro. GLM-5.1 has been open-sourced and can now be run locally, at a significantly reduced size and cost compared to its predecessor.
Engagement Score Over Time
Top Posts
Introducing GLM-5.1: The Next Level of Open Source
- Top-Tier Performance: #1 in open source and #3 globally across SWE-Bench Pro, Terminal-Bench, and NL2Repo.
- Built for Long-Horizon Tasks: Runs autonomously for 8 hours, refining strategies through thousands of iterations.



GLM-5.1 can now be run locally!
GLM-5.1 is a new open model for SOTA agentic coding & chat.
We shrank the 744B model from 1.65TB to 220GB (-86%) via Dynamic 2-bit quantization.
Runs on a 256GB Mac or on combined RAM/VRAM setups.
Guide: https://unsloth.ai/docs/models/glm-5.1 | GGUF: https://huggingface.co/unsloth/GLM-5.1-GGUF
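The size figures in the post above can be sanity-checked with a little arithmetic. This sketch only reuses the numbers quoted in the post (744B parameters, 1.65TB before, 220GB after); it does not reproduce Unsloth's Dynamic 2-bit scheme itself.

```python
# Sanity-check the quantization figures quoted in the post.
# All constants are taken from the post; nothing here is measured.

PARAMS = 744e9        # reported parameter count (744B)
ORIG_BYTES = 1.65e12  # reported original size (~1.65 TB)
QUANT_BYTES = 220e9   # reported quantized size (~220 GB)

orig_bits_per_param = ORIG_BYTES * 8 / PARAMS
quant_bits_per_param = QUANT_BYTES * 8 / PARAMS
reduction = 1 - QUANT_BYTES / ORIG_BYTES

print(f"original:  {orig_bits_per_param:.1f} bits/param")   # ~17.7, roughly FP16 plus overhead
print(f"quantized: {quant_bits_per_param:.1f} bits/param")  # ~2.4, consistent with "2-bit" plus metadata
print(f"reduction: {reduction:.0%}")                        # 87%, matching the quoted -86%
```

The average of ~2.4 bits per parameter, rather than exactly 2, is consistent with a "dynamic" scheme that keeps some sensitive layers at higher precision.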


we open-sourced glm-5.1. agents could do about 20 steps by the end of last year; glm-5.1 can do 1,700 right now. autonomous work time may be the most important curve after scaling laws. glm-5.1 will be the first point on that curve that the open-source community can verify with their own …

Can't stop thinking about how Claude Code is in LAST PLACE on TerminalBench for harnesses using Opus 4.6.
There are TEN separate harnesses that use Opus better than Claude Code does.

Narrative Momentum
Signal History
"GLM-5.1 Open Source AI Model" wave is slowing but still growing – up 18.6%
"GLM-5.1 Open Source AI Model" wave is surging – up 55.8%
"GLM-5.1 Open Source AI Model" wave is slowing but still growing – up 29.5%
"GLM-5.1 Open Source AI Model" wave is surging – up 89.8%