Lexipolis-9B-Chat v1.0 (政法智言大模型)

Model Overview

Lexipolis-9B-Chat v1.0 was obtained by fine-tuning GLM-4-9B-Chat, the open-source model released by THUDM at Tsinghua University.

Fine-tuning uses the LoRA algorithm and proceeds in two stages: incremental pre-training followed by instruction tuning. The incremental pre-training corpus, roughly 1 GB in total, consists of political news, work reports, national laws and regulations, leaders' speeches, political and legal course materials, official documents, and charters of national departments.
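The LoRA stage can be pictured numerically: instead of updating a full weight matrix W, training adjusts a low-rank pair (A, B) whose product forms the update, which keeps the trainable parameter count small. The dimensions and rank below are toy values for illustration only, not the model's actual configuration.

```python
# Minimal numerical sketch of the LoRA idea (toy sizes, not GLM-4's shapes):
# the frozen pretrained weight W stays fixed, and only the low-rank factors
# A (r x d_in) and B (d_out x r) are trained; the effective weight is W + B @ A.
import numpy as np

d_out, d_in, r = 1024, 1024, 8          # toy hidden sizes and LoRA rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                # B starts at zero, so the update starts at zero

delta_W = B @ A                          # low-rank update learned during tuning
W_adapted = W + delta_W                  # equals W before any training step

full_params = d_out * d_in
lora_params = r * (d_out + d_in)
print(f"trainable params: {lora_params} vs full: {full_params} "
      f"({lora_params / full_params:.2%})")
```

Because only A and B are trained, the trainable parameter count scales with the rank r rather than with the full matrix size, which is what makes fine-tuning a 9B-parameter base model tractable.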

The instruction-tuning data, roughly 550 MB, covers case adjudication (case facts aligned with judgments), statute citation (case facts aligned with the statutes they invoke), and instruction-understanding corpora (e.g., instructions to write political news aligned with the resulting articles).
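One record in such alignment data can be sketched as a JSON-lines entry pairing an instruction and case facts with the target judgment. The field names and the example case below are purely illustrative assumptions, not the team's actual schema or data.

```python
# Hypothetical instruction-tuning record (illustrative schema and content only):
# one JSON object per line, aligning case facts ("input") with a judgment ("output").
import json

record = {
    "instruction": "根据以下案件事实,给出审判结果。",  # "Given the case facts below, produce the judgment."
    "input": "被告人张某于2023年3月盗窃他人财物,价值人民币五千元。",
    "output": "被告人张某犯盗窃罪,判处拘役四个月,并处罚金人民币二千元。",
}

line = json.dumps(record, ensure_ascii=False)  # JSONL: one object per line, CJK kept readable
print(line)
```

The same three-field shape works for the statute-citation pairs as well, with the cited statutes taking the place of the judgment in the `output` field.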

Project Team

Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences): Du Yu ([email protected])




Model Details

Model size: 9.4B parameters; tensor type: BF16; weights stored in Safetensors format.
