---
language:
  - fr
  - en
license: cc-by-sa-3.0
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: eval
        path: data/eval-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: instruction
      dtype: string
    - name: context
      dtype: string
    - name: category
      dtype: string
    - name: fr_context
      dtype: string
    - name: fr_response
      dtype: string
    - name: fr_instruction
      dtype: string
    - name: response
      dtype: string
    - name: qid
      dtype: int64
  splits:
    - name: train
      num_bytes: 9785049.205202311
      num_examples: 3300
    - name: eval
      num_bytes: 1340255.2244701348
      num_examples: 452
    - name: test
      num_bytes: 1186066.570327553
      num_examples: 400
  download_size: 7746263
  dataset_size: 12311371
---

# Dataset Card for "dolly_context_enfr"

This is a filtered version of databricks-dolly-15k, translated into French with the DeepL Pro API, one of the highest-quality machine translation services available.
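For reference, the translation step could look like the minimal sketch below, using the official `deepl` Python client. The auth key is a placeholder and the exact script and batching used for this dataset are unknown; the field names come from the schema above.

```python
import deepl

# Placeholder credentials: a real DeepL Pro auth key is required.
translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")

def translate_example(example):
    """Add French translations of the instruction, context and response fields."""
    for src, dst in [("instruction", "fr_instruction"),
                     ("context", "fr_context"),
                     ("response", "fr_response")]:
        result = translator.translate_text(example[src], source_lang="EN", target_lang="FR")
        example[dst] = result.text
    return example
```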

Our goal is to gather French data for question answering over a context, where the model should not introduce information that is not present in the given context; in other words, we want to limit hallucination. The filtering was done in several steps (a sketch of these filters follows the list):

  • We keep only examples with a non-empty context (we are not interested in open-ended chat or unsourced information)
  • We drop examples where the answer is more than 1.5 times longer than the context; our inspection of the data showed that in those cases the information comes from sources other than the context and/or the answer is a copy-paste of the context
  • For long contexts (>1000 characters), we drop examples where the answer is longer than the context (in characters)
  • We also filter out around 30 examples with a very long context (10k characters), answer (5k characters) or instruction (5k characters), as these turned out to have a wrong format
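
A rough sketch of these filters applied to the original records, assuming the public databricks/databricks-dolly-15k dataset and interpreting the length limits above as strict thresholds; this is an illustration, not the exact script used to build the dataset:

```python
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

def keep(ex):
    ctx, ans, inst = ex["context"], ex["response"], ex["instruction"]
    if not ctx.strip():                          # 1. drop examples without a context
        return False
    if len(ans) > 1.5 * len(ctx):                # 2. answer much longer than the context
        return False
    if len(ctx) > 1000 and len(ans) > len(ctx):  # 3. long context, answer longer than context
        return False
    if len(ctx) > 10_000 or len(ans) > 5_000 or len(inst) > 5_000:
        return False                             # 4. malformed, overly long examples
    return True

filtered = dolly.filter(keep)
```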

Our filtered version of the Dolly dataset contains only 3 of the 7 original categories. The annotation guidelines for each of these categories were as follows:

  • Closed QA: Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
  • Summarization: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
  • Information Extraction: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.

| Category | Samples |
| --- | --- |
| closed_qa | 1711 |
| information_extraction | 1377 |
| summarization | 1064 |
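
The published splits and the per-category counts above can be checked directly with the `datasets` library; a minimal sketch, assuming the repo id is LsTam/dolly_context_enfr:

```python
from collections import Counter
from datasets import load_dataset

# Assumed repo id, inferred from the card title.
ds = load_dataset("LsTam/dolly_context_enfr")

# Per-category counts over all splits; they should match the table above.
counts = Counter()
for split in ("train", "eval", "test"):
    counts.update(ds[split]["category"])
print(counts)
```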

Note that we also considered the 'brainstorming' and 'classification' data, but they are not suited to our LLM project and are quite subjective (as they are not based on a context), so we decided not to use them.
