---
language:
  - en
license: mit
multilinguality:
  - monolingual
task_categories:
  - question-answering
task_ids:
  - closed-domain-qa
  - extractive-qa
size_categories:
  - 1K<n<10K
source_datasets:
  - original
tags:
  - agriculture
  - Extension
  - agriculture Extension
  - irrigation
pretty_name: AgXQA 1.1
dataset_info:
  config_name: agxqa_v1
  features:
    - name: id
      dtype: string
    - name: category
      dtype: string
    - name: context
      dtype: string
    - name: question
      dtype: string
    - name: answers
      sequence:
        - name: text
          dtype: string
        - name: answer_start
          dtype: int32
    - name: references
      dtype: string
  splits:
    - name: train
      num_examples: 1503
    - name: validation
      num_examples: 353
    - name: test
      num_examples: 330
configs:
  - config_name: agxqa_v1
    default: true
    data_files:
      - split: train
        path: agxqa-train-2024-06-11.jsonl
      - split: validation
        path: agxqa-validation-2024-06-11.jsonl
      - split: test
        path: agxqa-test-2024-06-11.jsonl
---

Dataset Card for AgXQA 1.1

Dataset Description

Dataset Summary

The Agricultural eXtension Question Answering Dataset (AgXQA 1.1) is a small-scale, SQuAD-like QA dataset targeting the Agricultural Extension (AE) domain. Version 1.1 contains 2,186 questions (2.1K+) on irrigation topics across the US, focusing on the Midwest, since our crops of interest were mainly soybean and corn.

Supported Tasks and Leaderboards

Extractive, closed-domain question answering.
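
As a quick illustration of the task, the sketch below runs a generic extractive QA model over a question/context pair from this dataset's domain. This is a hedged sketch: the card gives no Hub id for the paper's AgRoBERTa model, so the generic SQuAD 2.0 checkpoint deepset/roberta-base-squad2 stands in for it.

# Minimal extractive QA sketch (assumes `pip install transformers`).
# deepset/roberta-base-squad2 is a generic SQuAD 2.0 model used here only
# as a stand-in for the paper's AgRoBERTa checkpoint, whose Hub id is not
# listed on this card.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
result = qa(
    question="what is infiltration rate?",
    context=(
        "Irrigation Fact Sheet # 2: Instantaneous Rates. The soils "
        "infiltration rate is the rate water can enter the soils surface."
    ),
)
print(result["answer"], result["score"])  # extracted span and confidence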

Languages

English (en).

Dataset Structure

Data Instances

agxqa_v1

An example from the 'test' split looks as follows (the "context" field has been truncated because the full paragraph is too long):

{
    "answers": {
        "answer_start": [78, 21],
        "text": [" the rate water can enter the soils surface", "the quantity of water that can enter the soil in a specified time interval"]
    },
    "context": "Irrigation Fact Sheet # 2: Instantaneous Rates. The soils infiltration rate is the rate water can enter the soils surface. Michigan soils...",
    "id": "1170477",
    "question": "what is infiltration rate?",
    "category": "Irrigation",
    "references": "Kelley, L. (2007a). Irrigation Fact Sheet # 2 - Irrigation Application Instantaneous Rates. https://www.canr.msu.edu/uploads/235/67987/FactSheets/2_IrrigationApplicationRates1.30.pdf"
}
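
The snippet below is a minimal loading sketch using the datasets library. It assumes the Hub repository id is eusojk/agxqa_v1 (matching the config name above); adjust the id if the dataset is hosted elsewhere.

# Load AgXQA 1.1 and inspect the fields of one test example.
# Assumption: the Hub repo id is "eusojk/agxqa_v1"; swap in the real id.
from datasets import load_dataset

ds = load_dataset("eusojk/agxqa_v1")
example = ds["test"][0]
print(example["question"])
print(example["answers"]["text"])          # gold answer spans
print(example["answers"]["answer_start"])  # character offsets into "context"

# The JSONL files listed under `configs` can also be loaded directly:
raw = load_dataset("json", data_files={
    "train": "agxqa-train-2024-06-11.jsonl",
    "validation": "agxqa-validation-2024-06-11.jsonl",
    "test": "agxqa-test-2024-06-11.jsonl",
})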

Data Fields

The data fields are the same among all splits.

agxqa_v1

  • id: a string feature.
  • category: a string feature.
  • context: a string feature.
  • question: a string feature.
  • answers: a dictionary feature containing:
    • text: a string feature.
    • answer_start: an int32 feature.
  • references: a string feature.

Data Splits

| name     | train | validation | test |
|----------|-------|------------|------|
| agxqa_v1 | 1503  | 353        | 330  |
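
To sanity-check the table above, the splits can be sized and converted to pandas; a small sketch, reusing the ds object from the loading example:

# Confirm split sizes (expected: 1503 / 353 / 330) and look at categories.
for split in ("train", "validation", "test"):
    print(split, ds[split].num_rows)

df = ds["train"].to_pandas()
print(df["category"].value_counts())  # e.g., "Irrigation"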

Dataset Creation

Curation Rationale

The creation of this dataset aims to enhance the performance of NLP models (e.g., LLMs) in understanding and extracting relevant information about agro-hydrological practices for crops such as corn and soybeans.

Scope and Domain

The dataset specifically focuses on irrigation practices, techniques, and related agricultural knowledge concerning corn and soybeans. This includes, but is not limited to:

  • irrigation laws and policies,
  • irrigation methods (e.g., drip, sprinkler, furrow),
  • irrigation scheduling,
  • soil moisture monitoring,
  • crop growth stages,
  • crop water requirements,
  • general crop (soybean and corn) characteristics.

Source Data

Initial Data Collection and Normalization

About 600 paragraphs (i.e., the contexts) were extracted from the Agriculture Extension Corpus (AEC1.1). For more details about AEC1.1's data sources, please refer to its dataset card here.
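
One way to check the paragraph count is to count distinct "context" strings across all splits; a sketch, again assuming the ds object loaded above:

# Count distinct contexts (the card reports about 600 source paragraphs).
contexts = set()
for split in ("train", "validation", "test"):
    contexts.update(ds[split]["context"])
print(len(contexts))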

Who are the source language producers?

  • CECO curated and supervised the creation and annotation of the QA pairs.
  • Regarding the original paragraphs/contexts, please see here.

Annotations

Annotation process

We followed the general guidelines described in Rajpurkar et al. (2016), which also inspired us to create a SQuAD-like dataset. We used Deepset's annotation tool to annotate the paragraphs and create the QA pairs.

Our main guidelines can be summarized as follows:

  • Question formulation: Based on the rationale in the paragraph, the extracted questions represented common queries by farmers and agricultural practitioners regarding irrigation.
  • Answer collection: Answers are verbatim spans already present in the paragraph, so the annotations cover both short and long (see the span-alignment sketch after this list):
    • clauses
    • subjects
    • predicates
    • phrases (nouns, verbs, adjectives and adverbials)
  • Quality control: Domain experts reviewed and validated the QA pairs to ensure accuracy and relevance. This review was conducted weekly on 50% of that week's annotated batch, selected at random.
  • Diversity and coverage: Since the crops of interest (soybeans and corn) are mostly grown in the Midwest states of the USA, most of the QA pairs cover those states. However, the dataset also includes general irrigation QA pairs that apply in most states.
  • Ethical considerations: To maintain transparency and credibility, we cited the original authors of the annotated paragraphs for each QA pair. Please see the annotated example provided above.
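
Because every answer is a verbatim span, each (text, answer_start) pair should line up exactly with its context. The sketch below is one way to audit that alignment; it is an illustration, not part of the authors' annotation pipeline.

# Verify that each answer span matches the context at its recorded offset.
def count_misaligned_spans(split):
    bad = 0
    for ex in split:
        for text, start in zip(ex["answers"]["text"],
                               ex["answers"]["answer_start"]):
            if ex["context"][start:start + len(text)] != text:
                bad += 1
    return bad

print(count_misaligned_spans(ds["validation"]))  # 0 if all spans align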

For more information on the annotation process, please refer to the accompanying paper.

Who are the annotators?

There were three annotators in total, two of whom had a background in agricultural and environmental topics. They were hired and supervised by two experts in water and irrigation research.

Personal and Sensitive Information

  • Some original paragraphs contained extension educators' names and email addresses, but these have been anonymized: each was replaced with x's in our dataset.
  • For each paragraph, we referenced the main article from which the context was extracted.
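
The card does not describe the exact anonymization procedure, so the sketch below is only a plausible illustration of the "replaced with x's" convention (a simple email regex), not the authors' actual pipeline.

import re

# Illustration only: replace every character of an email address with "x",
# mimicking the anonymization convention described above.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_emails(text: str) -> str:
    return EMAIL_RE.sub(lambda m: "x" * len(m.group()), text)

print(redact_emails("Contact jane.doe@msu.edu for details."))
# -> Contact xxxxxxxxxxxxxxxx for details.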

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

  • Version 1.1 is quite small compared to most QA datasets and only covers irrigation-related topics, so we advise against using it in production: real-world agriculture questions often require temporal and geospatial information, which is not covered yet.
  • We found three paragraphs that contained URLs (links to an Extension YouTube video and a decision support tool). These paragraphs are outliers whose links do not necessarily provide implicit answers; they will be removed in version 2.
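
Until version 2 removes them, the URL-bearing paragraphs can be located with a simple scan; a sketch, assuming the ds object from above:

# Find contexts containing URLs (the card notes three such paragraphs).
url_ids = {
    ex["id"]
    for split in ("train", "validation", "test")
    for ex in ds[split]
    if "http://" in ex["context"] or "https://" in ex["context"]
}
print(sorted(url_ids))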

Other Known Limitations

More Information Needed

Citation Information

BibTeX:

@article{KPODO2024109349,
    title = {AgXQA: A benchmark for advanced Agricultural Extension question answering},
    journal = {Computers and Electronics in Agriculture},
    volume = {225},
    pages = {109349},
    year = {2024},
    issn = {0168-1699},
    doi = {10.1016/j.compag.2024.109349},
    url = {https://www.sciencedirect.com/science/article/pii/S0168169924007403},
    author = {Josué Kpodo and Parisa Kordjamshidi and A. Pouyan Nejadhashemi},
    keywords = {Agricultural Extension, Question-Answering, Annotated Dataset, Large Language Models, Zero-Shot Learning},
    abstract = {Large language models (LLMs) have revolutionized various scientific fields in the past few years, thanks to their generative and extractive abilities. However, their applications in the Agricultural Extension (AE) domain remain sparse and limited due to the unique challenges of unstructured agricultural data. Furthermore, mainstream LLMs excel at general and open-ended tasks but struggle with domain-specific tasks. We proposed a novel QA benchmark dataset, AgXQA, for the AE domain to address these issues. We trained and evaluated our domain-specific LM, AgRoBERTa, which outperformed other mainstream encoder- and decoder- LMs, on the extractive QA downstream task by achieving an EM score of 55.15% and an F1 score of 78.89%. Besides automated metrics, we also introduced a custom human evaluation metric, AgEES, which confirmed AgRoBERTa’s performance, as demonstrated by a 94.37% agreement rate with expert assessments, compared to 92.62% for GPT 3.5. Notably, we conducted a comprehensive qualitative analysis, whose results provide further insights into the weaknesses and strengths of both domain-specific and general LMs when evaluated on in-domain NLP tasks. Thanks to this novel dataset and specialized LM, our research enhanced further development of specialized LMs for the agriculture domain as a whole and AE in particular, thus fostering sustainable agricultural practices through improved extractive question answering.}
}

APA:

Kpodo, J., Kordjamshidi, P., & Nejadhashemi, A. P. (2024). AgXQA: A benchmark for advanced Agricultural Extension question answering. Computers and Electronics in Agriculture, 225, 109349. https://doi.org/10.1016/j.compag.2024.109349