---
language:
- ar
size_categories:
- 1K<n<10K
task_categories:
- conversational
- text-generation
- text-classification
tags:
- mental health
dataset_info:
features:
- name: content
dtype: string
- name: text_size
dtype: int64
- name: topic
dtype: string
- name: prob
dtype: float64
splits:
- name: train
num_bytes: 6007437.514440433
num_examples: 1884
download_size: 2896563
dataset_size: 6007437.514440433
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Nafsy
<!-- Provide a quick summary of the dataset. -->
This Arabic dataset is a collection of mental health articles. The original data was scraped from [Nafsy.net](https://nafsy.net/).
## Dataset Details
**Language(s) (NLP):** Arabic
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
Fine-tuning LLMs in the mental health domain.
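As an illustration only, the `content` field can be tokenized for causal-LM fine-tuning. The model and dataset ids below are placeholders, not prescribed by this card:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder ids -- substitute this dataset's actual Hub id and any
# Arabic-capable causal language model of your choice.
tokenizer = AutoTokenizer.from_pretrained("your-org/arabic-causal-lm")
ds = load_dataset("your-username/nafsy", split="train")

def tokenize(batch):
    # Truncate long articles to a fixed context length.
    return tokenizer(batch["content"], truncation=True, max_length=1024)

tokenized = ds.map(tokenize, batched=True, remove_columns=ds.column_names)
```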
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
It is a single CSV file (train split) with the following columns:
- `content`: the full text of the article
- `text_size`: the length of the article
- `topic`: the top 10 words describing the article's topic (produced by topic modeling)
- `prob`: the topic prediction probability, i.e. the topic model's confidence in that assignment
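A minimal sketch of loading the dataset with the 🤗 `datasets` library and inspecting these fields (the repository id below is a placeholder; substitute this dataset's actual Hub id):

```python
from datasets import load_dataset

# Placeholder repository id -- replace with this dataset's actual Hub id.
ds = load_dataset("your-username/nafsy", split="train")

print(ds.column_names)   # ['content', 'text_size', 'topic', 'prob']
print(ds[0]["topic"])    # top-10 topic words for the first article
print(ds[0]["prob"])     # probability of that topic assignment
```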
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The dataset was curated to support building an Arabic chatbot for mental health support.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
- This dataset was originally scraped from [Nafsy.net](https://nafsy.net/) and then uploaded to Kaggle.
- Additional preprocessing was performed by the owner of this repository (a rough sketch follows this list):
  - Cleaning the data: removing URLs, extra whitespace, and non-word tokens; detaching punctuation; and dropping duplicates
  - Applying topic modeling to generate the main topics of each article using a bert-base-arabic model
  - Deduplicating the data with sentence-transformers (paraphrase-multilingual-MiniLM-L12-v2)
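The exact preprocessing scripts are not published here; the following is a rough sketch of how the cleaning and embedding-based deduplication steps could look. The regex patterns, the toy input, and the 0.9 similarity threshold are assumptions, not the repository owner's actual settings:

```python
import re
from sentence_transformers import SentenceTransformer, util

# Toy input standing in for the scraped articles (the real corpus is not shown here).
raw_articles = [
    "مقال عن الصحة النفسية وإدارة القلق. https://example.com/page",
    "مقال عن الصحة النفسية وإدارة القلق. https://example.com/page",
    "مقال مختلف تماما عن الاكتئاب وطرق العلاج.",
]

def clean(text: str) -> str:
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # remove URLs
    text = re.sub(r"([^\w\s])", r" \1 ", text)          # detach punctuation from words
    return re.sub(r"\s+", " ", text).strip()            # collapse extra whitespace

# Drop exact duplicates after cleaning.
articles = list(dict.fromkeys(clean(a) for a in raw_articles))

# Near-duplicate removal with multilingual sentence embeddings.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
emb = model.encode(articles, convert_to_tensor=True, normalize_embeddings=True)

kept_idx, kept = [], []
for i, e in enumerate(emb):
    # Keep an article only if it is not too similar to any article already kept.
    if not kept_idx or float(util.cos_sim(e, emb[kept_idx]).max()) < 0.9:
        kept_idx.append(i)
        kept.append(articles[i])

print(len(kept), "articles kept")
```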
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[husamal](https://www.kaggle.com/husamal)
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**

```bibtex
@misc{Husamal_2021,
  title   = {Arabic-physcology-dataset},
  url     = {https://www.kaggle.com/datasets/husamal/arabicphyscologydataset?select=nafsy.csv},
  journal = {Kaggle},
  author  = {Husamal},
  year    = {2021},
  month   = {May}
}
```
## Dataset Card Authors
Muhammad Helmy
## Dataset Card Contact
[email protected] |