- GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning (Paper • 2507.01006 • 193 upvotes)
- THUDM/GLM-4.1V-9B-Thinking (Image-Text-to-Text • 10B • 58.6k downloads • 657 likes)
- THUDM/GLM-4.1V-9B-Base (Image-Text-to-Text • 10B • 3.72k downloads • 33 likes)
- GLM-4.1V-9B-Thinking-API-Demo (Space • THUDM/GLM-4.1V-9B-Thinking demo • 24 likes)
AI & ML interests
AGI, LLMs, ChatGLM
The Knowledge Engineering Group (KEG) & Data Mining at Tsinghua University (THUDM).
We build the ChatGLM family of LLMs, develop LLMs as Agents, and release related LLM training & inference techniques:
- GLM-4, CodeGeeX, CogVLM (VisualGLM), WebGLM, GLM-130B, CogView, CogVideo, and CogVideoX.
- CogAgent, AutoWebGLM, AgentTuning, APAR.
We also work on LLM evaluations: AgentBench, AlignBench, LongBench, NaturalCodeBench.
We also pre-train graph neural networks: GraphMAE, GPT-GNN, GCC, SelfKG, CogDL.
We also work on graph embedding theory, algorithms, and systems: SketchNE, ProNE, NetSMF, NetMF.
We started with social networks and graphs, and always love them: AMiner.
spaces (12)
- GLM-4.1V-9B-Thinking-API-Demo: THUDM/GLM-4.1V-9B-Thinking demo
- GLM-4.1V-9B-Thinking-Demo: THUDM/GLM-4.1V-9B-Thinking demo
- CogVideoX-5B: Text-to-Video
- CogVideoX-2B: Text-to-Video
- MotionBench Leaderboard: submit and view model evaluations on a leaderboard
- LVBench Leaderboard: submit model evaluations to a leaderboard
models (114)
- THUDM/glm-4-9b-chat-1m
- THUDM/SWE-Dev-9B
- THUDM/SWE-Dev-7B
- THUDM/SWE-Dev-32B
- THUDM/GLM-4.1V-9B-Thinking
- THUDM/GLM-4.1V-9B-Base
- THUDM/androidgen-llama-3-70b
- THUDM/androidgen-glm-4-9b
- THUDM/cogvlm2-llama3-caption