---
license: cc-by-nc-4.0
task_categories:
  - text-generation
language:
  - zh
size_categories:
  - n<1K
---


(README generated by Claude 3.5 Sonnet)

# Classical Chinese Poetry Benchmark for LLMs

## Introduction

This is a benchmark designed to evaluate large language models' capabilities in understanding and generating classical Chinese poetry. It includes a diverse test dataset and a complete evaluation framework for systematically assessing and comparing different models' performance in this domain.

## Dataset Description

The dataset (`poetry_benchmark.jsonl`) contains 70 test samples covering the following dimensions (a loading sketch follows the list):

- Question types:
  - Couplet completion (couplet)
  - Line blank-filling (blank_filling)
  - Poem identification (find_poetry)
  - Hint-word completion (hint_words)
  - First/last-word completion (first_last_words)
- Difficulty levels:
  - Easy
  - Medium
  - Hard
- Dynasty coverage:
  - From the Pre-Qin period to the modern era
  - Including the Tang, Song, Yuan, Ming, and Qing dynasties
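
The per-record schema is not documented in this README, so the sketch below only shows how one might load and inspect the JSONL file; the `type` field name is an assumption, with the type labels taken from the results table further down.

```python
import json
from collections import Counter

# Read all 70 records; JSONL stores one JSON object per line.
with open("poetry_benchmark.jsonl", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f]

# The schema is not documented here, so inspect it rather than assume it.
print(sorted(samples[0].keys()))

# Count samples per question type, assuming a "type" field holding labels
# such as "couplet", "blank_filling", "find_poetry" (see the results table).
print(Counter(s.get("type") for s in samples))
```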

## Evaluation Dimensions

The evaluation framework assesses each model across the following dimensions (an aggregation sketch follows the list):

1. Overall accuracy
2. Performance by question type
3. Performance by difficulty level
4. Mastery of poetry from different dynasties
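
As an illustration of how these dimensions can be scored, here is a minimal aggregation sketch over `detailed_results.jsonl` (see the project structure below); the field names (`type`, `difficulty`, `dynasty`, boolean `correct`) and the directory name are assumptions, not the framework's documented format.

```python
import json
from collections import defaultdict

# correct/total tallies per group, for each evaluation dimension.
tallies = {dim: defaultdict(lambda: [0, 0]) for dim in ("type", "difficulty", "dynasty")}

# Hypothetical per-question results file produced by one evaluation run.
with open("evaluation_results.deepseek-chat/detailed_results.jsonl", encoding="utf-8") as f:
    for line in f:
        rec = json.loads(line)  # assumed fields: type, difficulty, dynasty, correct
        for dim, groups in tallies.items():
            groups[rec[dim]][0] += int(rec["correct"])
            groups[rec[dim]][1] += 1

for dim, groups in tallies.items():
    for name, (correct, total) in sorted(groups.items()):
        print(f"{dim}/{name}: {correct}/{total} ({correct / total:.2%})")
```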

## Evaluation Results

| Model | blank_filling | couplet | find_poetry | first_last_words | hint_words | Total | Overall accuracy |
| --- | --- | --- | --- | --- | --- | --- | --- |
| deepseek-chat | 7/15 | 9/20 | 0/15 | 7/10 | 0/10 | 23/70 | 32.86% |
| yi-lightning | 6/15 | 10/20 | 0/15 | 4/10 | 2/10 | 22/70 | 31.43% |
| qwen2.5-72b | 6/15 | 7/20 | 0/15 | 4/10 | 0/10 | 17/70 | 24.29% |
| gpt-4o-2024-08-06 | 5/15 | 6/20 | 0/15 | 4/10 | 1/10 | 16/70 | 22.86% |
| gemini-1.5-flash | 0/15 | 4/20 | 0/15 | 2/10 | 0/10 | 6/70 | 8.57% |

## Project Structure

```
.
├── poetry_benchmark.jsonl                # Test dataset
├── compare_results.py                    # Result comparison and visualization script
└── evaluation_results.<model_name>/      # Per-model evaluation results directory
    ├── evaluation_report.json            # Evaluation report
    └── detailed_results.jsonl            # Detailed per-question results
```

## Usage

1. Run the evaluation (a driver sketch that automates these steps follows this list):

   ```
   python evaluate_poetry.py --model <model_name> --output_dir evaluation_results.<model_name>
   ```

2. Compare the results:

   ```
   python compare_results.py
   ```

3. View the results. The comparison script generates the following visualizations:

   - accuracy_comparison.html: overall accuracy comparison
   - accuracy_by_type.html: accuracy by question type
   - accuracy_by_difficulty.html: accuracy by difficulty level
   - accuracy_by_dynasty.html: accuracy by dynasty
   - radar_comparison.html: multi-dimensional capability radar chart
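
The driver sketch referenced in step 1: it simply chains the two commands above for several models, using only the flags documented in this README and model names from the results table. It is a convenience sketch, not part of the benchmark itself.

```python
import subprocess

# Evaluate each model into its own results directory (step 1), then build
# the comparison visualizations (step 2). Model names come from the
# results table above; extend the list as needed.
models = ["deepseek-chat", "yi-lightning", "qwen2.5-72b"]
for model in models:
    subprocess.run(
        ["python", "evaluate_poetry.py",
         "--model", model,
         "--output_dir", f"evaluation_results.{model}"],
        check=True,
    )

subprocess.run(["python", "compare_results.py"], check=True)
```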

## Output Format

The evaluation report (`evaluation_report.json`) has the following structure:

```json
{
    "overall": {
        "total": 70,
        "correct": 45,
        "accuracy": 0.643
    },
    "by_type": {
        "couplet": {...},
        "blank_filling": {...},
        ...
    },
    "by_difficulty": {...},
    "by_dynasty": {...}
}
```
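
Reading the report back is straightforward; the sketch below assumes each `by_type` entry mirrors the `total`/`correct`/`accuracy` shape of the `overall` block, which the elided `{...}` entries above do not confirm, and uses a hypothetical per-model path.

```python
import json

# Load one model's report (hypothetical path; see the project structure above).
with open("evaluation_results.deepseek-chat/evaluation_report.json", encoding="utf-8") as f:
    report = json.load(f)

overall = report["overall"]
print(f"overall: {overall['correct']}/{overall['total']} ({overall['accuracy']:.1%})")

# Assumption: per-type entries share the shape of the "overall" block.
for qtype, stats in sorted(report["by_type"].items()):
    print(f"{qtype}: {stats['correct']}/{stats['total']} ({stats['accuracy']:.1%})")
```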
