Fraser Greenlee committed
Commit 08411d3 · 1 Parent(s): 7b42a60
Files changed (1):
  1. short-jokes.py +62 -0
short-jokes.py ADDED
@@ -0,0 +1,62 @@
+"""Short jokes dataset."""
+
+from __future__ import absolute_import, division, print_function
+
+import datasets
+
+
+_DESCRIPTION = """\
+Copy of [Kaggle dataset](https://www.kaggle.com/abhinavmoudgil95/short-jokes), adding to Huggingface for ease of use.
+
+Description from Kaggle:
+
+Context
+
+Generating humor is a complex task in the domain of machine learning, and it requires the models to understand the deep semantic meaning of a joke in order to generate new ones. Such problems, however, are difficult to solve due to a number of reasons, one of which is the lack of a database that gives an elaborate list of jokes. Thus, a large corpus of over 0.2 million jokes has been collected by scraping several websites containing funny and short jokes.
+
+Visit my Github repository for more information regarding collection of data and the scripts used.
+
+Content
+
+This dataset is in the form of a csv file containing 231,657 jokes. Length of jokes ranges from 10 to 200 characters. Each line in the file contains a unique ID and joke.
+
+Disclaimer
+
+It has been attempted to keep the jokes as clean as possible. Since the data has been collected by scraping websites, it is possible that there may be a few jokes that are inappropriate or offensive to some people.
+"""
+
+_CITATION = None
+
+_TRAIN_DOWNLOAD_URL = "https://raw.githubusercontent.com/Fraser-Greenlee/my-huggingface-datasets/master/data/short-jokes/train.json"
+
+
+class ShortJokes(datasets.GeneratorBasedBuilder):
+    """Short jokes dataset."""
+
+    def _info(self):
+        return datasets.DatasetInfo(
+            description=_DESCRIPTION,
+            features=datasets.Features(
+                {
+                    'text': datasets.Value("string"),
+                }
+            ),
+            homepage="https://github.com/Fraser-Greenlee/my-huggingface-datasets",
+            citation=_CITATION,
+        )
+
+    def _split_generators(self, dl_manager):
+        train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL)
+        return [
+            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
+        ]
+
+    def _generate_examples(self, filepath):
+        """Generate examples."""
+        with open(filepath, encoding="utf-8") as txt_lines_file:
+            data = []
+            for line in txt_lines_file:
+                data.append({'text': line})
+
+        for id_, row in enumerate(data):
+            yield id_, row
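
Usage note: the added file is a GeneratorBasedBuilder script for the datasets library, so it is consumed through load_dataset rather than run directly. A minimal sketch of the intended usage, assuming a local copy of the script (the published hub repo id is not shown in this commit, so a local path is used for illustration):

from datasets import load_dataset

# Point load_dataset at a local copy of the builder script; substitute the
# hub repo id once the dataset is published (local path assumed here).
dataset = load_dataset("./short-jokes.py", split="train")

# Each example is a dict with a single "text" field, as declared in _info().
print(dataset[0]["text"])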