---
license: apache-2.0
task_categories:
- feature-extraction
size_categories:
- 100M<n<1B
---

# Foursquare OS Places 100M

Full Foursquare OS Places dump from https://opensource.foursquare.com/os-places/.
This is a single (geo-)parquet file based on the 81 individual parquet files from fused.io at https://source.coop/fused/fsq-os-places/2024-11-19/places.

As it's just 10 GB, it's fairly easy to handle as a single file and can easily be queried remotely via httpfs.

## Ways to query the file

If you just want to poke around in the data to get an idea of what kinds of places to expect, I'd recommend DuckDB.
Hugging Face has an integrated DuckDB WASM console, but it's too slow (or runs out of memory) when run over the entire file. You can try it [here](https://huggingface.co/datasets/do-me/foursquare_places_100M?sql_console=true&sql=--+The+SQL+console+is+powered+by+DuckDB+WASM+and+runs+entirely+in+the+browser.%0A--+Get+started+by+typing+a+query+or+selecting+a+view+from+the+options+below.%0ASELECT+*+FROM+train+WHERE+name+ILIKE+%27%25bakery%25%27%3B%0A).

A better way is to run DuckDB locally and query the file over httpfs (you do not need to download the whole file; thanks to HTTP range requests, DuckDB pulls only the parts of the parquet file the query actually touches), e.g. in Python or in the CLI.
Here I use Jupyter and convert the results to a pandas DataFrame for display convenience.

### Example 1: Queried over httpfs with DuckDB

```python
import duckdb

# Load the httpfs extension so DuckDB can read remote files over HTTP(S)
duckdb.sql("INSTALL httpfs; LOAD httpfs;")

duckdb.sql("SELECT * FROM 'hf://datasets/do-me/foursquare_places_100M/foursquare_places.parquet' WHERE name ILIKE '%bakery%'").df()
```

This command takes roughly 70 seconds on my system/network (M3 Max) and yields 104,985 entries:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/LsCYG2HTQmO0t6cJa3Y3R.png)

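Since DuckDB fetches only the byte ranges a query touches, you can cut remote transfer further by projecting fewer columns or aggregating instead of pulling full rows. A minimal sketch, assuming this dump keeps the `country` column from the FSQ OS Places schema:

```python
import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")

# Sketch: count bakeries per country without materializing full rows.
# The 'country' column name is an assumption about this dump's schema.
duckdb.sql("""
    SELECT country, count(*) AS n
    FROM 'hf://datasets/do-me/foursquare_places_100M/foursquare_places.parquet'
    WHERE name ILIKE '%bakery%'
    GROUP BY country
    ORDER BY n DESC
    LIMIT 10
""").df()
```
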
### Example 2: Queried fully locally with DuckDB

This method is much faster, but you need to download the file once. Just download it directly by clicking [this link](https://huggingface.co/datasets/do-me/foursquare_places_100M/resolve/main/foursquare_places.parquet).
You could also use the Hugging Face client libraries, as sketched below.
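
A minimal sketch with `huggingface_hub` (which the `datasets` library builds on) to download and cache the file programmatically; the repo details come from the links above, the rest is an untested sketch:

```python
from huggingface_hub import hf_hub_download

# Download (and cache) the parquet file from the dataset repo.
# repo_type="dataset" is required; the default repo type is "model".
path = hf_hub_download(
    repo_id="do-me/foursquare_places_100M",
    filename="foursquare_places.parquet",
    repo_type="dataset",
)
print(path)  # local cache path you can hand to DuckDB or GeoPandas
```
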

```python
import duckdb

# Same query as Example 1, now against the local file
duckdb.sql("SELECT * FROM 'foursquare_places.parquet' WHERE name ILIKE '%bakery%'").df()
```

It yields precisely the same results but takes only 4.5 seconds!

### Example 3: Queried fully locally with GeoPandas

In case you'd like to do some easy geoprocessing, just stick to GeoPandas. It's of course a bit slower and loads everything into memory, but it gets the job done nicely.

```python
import geopandas as gpd

# Reads the (geo)parquet file into a GeoDataFrame, geometry column included
gdf = gpd.read_parquet("foursquare_places.parquet")
```

Loading the gdf once takes 2-3 minutes in my case.

Then you can make use of good old pandas query operators and geospatial tools.

E.g. looking for bakeries again:

```python
# case=False makes this match the case-insensitive ILIKE queries above
gdf[gdf["name"].str.contains("bakery", case=False)]
```

I was actually surprised that the string operator in pandas is that efficient! It takes only 11 seconds, which is fairly fast for 100M rows.
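
And since `gdf` is a GeoDataFrame, spatial filters work out of the box. A minimal sketch with GeoPandas' `.cx` coordinate indexer, assuming point geometries in lon/lat (EPSG:4326); the bounding box is an arbitrary illustrative choice around Rome:

```python
# Sketch: select all places inside a rough bounding box around Rome.
# .cx slices on geometry coordinates: [minx:maxx, miny:maxy].
rome = gdf.cx[12.35:12.65, 41.80:42.00]
print(len(rome))
```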