Datasets · Modalities: Text · Formats: parquet · Languages: English · Libraries: Datasets, Dask
Qingyun committed · verified
Commit dd55375 · 1 parent: 0829d50

Upload dataset
CC-MAIN-2015-27/train-00000-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:88cd57753689cee4ff9418e970b7869ebe51ebfb2d6342d99220848ae4d181fc
+size 365814730

CC-MAIN-2015-27/train-00001-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fff49108e8c24176dcf81ac3be652bd0d3ae50708447bff10cad4e65d69d1e38
+size 364951436

CC-MAIN-2015-27/train-00002-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d5c42b6219c5da12eee024b6ad8ab49126500029c66df48a5f48f4f632d3e0a
+size 365479218

CC-MAIN-2015-27/train-00003-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1224ea5af4b5f2a3a0979edb7e3bd5ea11597efcef6c690efa1fdc2cf683f0c6
+size 365889903

CC-MAIN-2015-27/train-00004-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0b4b63a2d1bc5c4ac01670a545ca46bcf2fe4865d851bac80703409fbb4d141
+size 366439939
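Each `ADDED` entry above is a Git LFS pointer file, not the parquet data itself: three text fields (`version`, `oid`, `size`) that tell Git LFS which object to fetch and how to verify it. A minimal sketch of parsing such a pointer and checking a downloaded file against it (the helper names here are ours, not part of any library):

```python
import hashlib
import os

def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_object(path, pointer):
    """Check a downloaded file against the oid (sha256) and size recorded in its pointer."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    expected_oid = pointer["oid"].split(":", 1)[1]  # strip the "sha256:" prefix
    return sha.hexdigest() == expected_oid and os.path.getsize(path) == int(pointer["size"])

# Pointer contents of the first shard added in this commit.
pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:88cd57753689cee4ff9418e970b7869ebe51ebfb2d6342d99220848ae4d181fc\n"
    "size 365814730\n"
)
```

The `size` field gives the byte count of the real parquet shard, so summing it across the five pointers above yields the total upload size without downloading anything.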
README.md CHANGED
@@ -788,6 +788,58 @@ dataset_info:
     num_examples: 1290530
   download_size: 2913627974
   dataset_size: 7008276790
+- config_name: CC-MAIN-2015-27
+  features:
+  - name: general_metadata
+    struct:
+    - name: domain
+      sequence: string
+    - name: fluency_prob
+      dtype: float64
+    - name: id
+      dtype: string
+    - name: non_advertisement_prob
+      dtype: float64
+    - name: politics_prob
+      dtype: float64
+    - name: porn_prob
+      dtype: float64
+    - name: toxic_prob
+      dtype: float64
+    - name: url
+      dtype: string
+  - name: images
+    sequence: string
+  - name: texts
+    sequence: string
+  - name: metadata
+    list:
+    - name: aesthetic_prob
+      dtype: float64
+    - name: bytes
+      dtype: int64
+    - name: d_hash
+      dtype: string
+    - name: d_hash_dup_count
+      dtype: int64
+    - name: height
+      dtype: int64
+    - name: img_url_sha
+      dtype: string
+    - name: p_hash
+      dtype: string
+    - name: p_hash_dup_count
+      dtype: int64
+    - name: unsafe_prob
+      dtype: float64
+    - name: width
+      dtype: int64
+  splits:
+  - name: train
+    num_bytes: 4320140953
+    num_examples: 784496
+  download_size: 1828575226
+  dataset_size: 4320140953
 configs:
 - config_name: CC-MAIN-2013-20
   data_files:
@@ -849,6 +901,10 @@ configs:
   data_files:
   - split: train
     path: CC-MAIN-2015-22/train-*
+- config_name: CC-MAIN-2015-27
+  data_files:
+  - split: train
+    path: CC-MAIN-2015-27/train-*
 ---

 We are uploading the dataset files ~
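The new `configs` entry in the README maps the config name to a `data_files` glob, so the five shards uploaded in this commit are exactly what `CC-MAIN-2015-27/train-*` resolves to. A quick sketch of that matching (the `load_dataset` line in the comment uses a placeholder repo id, which is not stated in this commit):

```python
import fnmatch

# The five parquet shards added in this commit.
shards = [f"CC-MAIN-2015-27/train-{i:05d}-of-00005.parquet" for i in range(5)]

# The data_files pattern from the new README config.
pattern = "CC-MAIN-2015-27/train-*"

# Every shard name matches the glob, so the config picks up all five files.
matched = [s for s in shards if fnmatch.fnmatch(s, pattern)]

# With the Hugging Face datasets library, the config would then load by name
# (repo id below is a placeholder, not taken from this commit):
# ds = load_dataset("<org>/<dataset>", "CC-MAIN-2015-27", split="train")
```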