do-me committed on
Commit
5e87643
1 Parent(s): f34b89f

Update README.md

Files changed (1)
  1. README.md +56 -2
README.md CHANGED
@@ -13,7 +13,7 @@ This is a single (geo-)parquet file based on the 81 individual parquet files fro
 
 As it's just 10 GB, it's fairly easy to handle as a single file and can easily be queried via modern technologies like httpfs.
 
- ## Ways to query the file
+ ## Ways to query the file & visualize the results
 If you just want to poke around in the data to get an idea of the kind of places to expect, I'd recommend DuckDB.
 Huggingface has a DuckDB WASM console integrated, but it's too slow (or runs out of memory) when run over the entire file. You can try it [here](https://huggingface.co/datasets/do-me/foursquare_places_100M?sql_console=true&sql=--+The+SQL+console+is+powered+by+DuckDB+WASM+and+runs+entirely+in+the+browser.%0A--+Get+started+by+typing+a+query+or+selecting+a+view+from+the+options+below.%0ASELECT+*+FROM+train+WHERE+name+ILIKE+%27%25bakery%25%27%3B%0A)
 
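+ Running DuckDB locally avoids the browser's limits. A minimal sketch using DuckDB's Python API over httpfs; the `hf://` glob path is an assumption based on this repo's id, not something the README prescribes:
+ 
+ ```python
+ import duckdb
+ 
+ # Query the hosted parquet remotely via httpfs, without downloading it first.
+ # The hf:// glob path is an assumption based on this repo's id.
+ con = duckdb.connect()
+ con.sql("""
+     SELECT *
+     FROM 'hf://datasets/do-me/foursquare_places_100M/**/*.parquet'
+     WHERE name ILIKE '%bakery%'
+ """).show()
+ ```
+ 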
@@ -60,10 +60,64 @@ Then you can make use of good old pandas query operators and geospatial tools.
 E.g. looking for bakeries again.
 ```python
 gdf[gdf["name"].str.contains("bakery")]
-
 ```
 I was actually surprised that the string operator in pandas is that efficient! It only takes 11 seconds, so fairly fast for 100M rows!
 
+ But to be fair, the actual equivalent to ILIKE in pandas would be this query:
+ 
+ ```python
+ gdf[gdf["name"].str.contains("bakery", case=False, na=False)]
+ ```
+ 
+ It yields exactly the same number of rows as the SQL command but takes 19 seconds, so, as expected, pandas is indeed much slower than DuckDB (especially considering that we only count the query time, not the loading time).
+ 
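+ A minimal sketch of how that query-only timing could be measured (assuming `gdf` is the GeoDataFrame loaded above):
+ 
+ ```python
+ import time
+ 
+ # Time only the filter itself, not the parquet loading.
+ t0 = time.perf_counter()
+ hits = gdf[gdf["name"].str.contains("bakery", case=False, na=False)]
+ print(f"{len(hits)} rows in {time.perf_counter() - t0:.1f} s")
+ ```
+ 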
+ ### Example 4: Queried fully locally with Geopandas and visualized with Lonboard
+ 
+ If you quickly want to visualize the data, Lonboard is a super convenient Jupyter wrapper for deck.gl.
+ Install it with `pip install lonboard` and you're ready to go.
+ 
+ ```python
+ import geopandas as gpd
+ from lonboard import viz
+ 
+ # Load the full parquet file into a GeoDataFrame
+ gdf = gpd.read_parquet("foursquare_places.parquet")
+ 
+ # Case-insensitive substring match, ignoring missing names
+ bakeries = gdf[gdf["name"].str.contains("bakery", case=False, na=False)]
+ 
+ viz(bakeries)
+ ```
+ It created a nice interactive map with tooltips.
+ 
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/QIolrK2nlrENnkWlE6TFh.png)
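+ 
+ If you want more control over the styling than `viz` offers, Lonboard also exposes explicit layer classes. A sketch; the color and radius values are arbitrary choices, not from the original example:
+ 
+ ```python
+ from lonboard import Map, ScatterplotLayer
+ 
+ # Build a styled point layer from the filtered GeoDataFrame.
+ layer = ScatterplotLayer.from_geopandas(
+     bakeries,
+     get_fill_color=[200, 30, 30],  # arbitrary red tone
+     radius_min_pixels=2,
+ )
+ Map(layer)
+ ```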
+ 
+ If you ask yourself: "Wait, how is it possible that there are so few bakeries in countries other than the US & UK?", just read on!
+ 
+ ## Motivation
+ 
+ So why would I create this repo and duplicate the data if you can download it directly from AWS or Source Cooperative?
+ 
+ It's mainly about convenience: many folks might just want to get an idea of the data, and httpfs is perfectly suited for this purpose.
+ However, the ACTUAL reason I'm doing this is that I created a geospatial semantic search workflow for social media data, Overturemaps and other data sources.
+ The idea is to use text embeddings and visualize the query similarity on a map. Just play with the apps here to get an idea; it's easier to understand once you've tried it:
+ 
+ - Geospatial Semantic Search for Instagram Data in Bonn, Germany: https://do-me.github.io/semantic-hexbins/
+ - Worldwide Geospatial Semantic Search for Overture Places: https://huggingface.co/datasets/do-me/overture-places
+ 
+ In example 4 above, where we are looking for bakeries around the world, it's clear that non-English-speaking countries do not necessarily name their bakeries "bakery" but e.g. "Bäckerei" in German.
+ So if we just search in the name column, we won't find them. That's the reason why Foursquare introduced categories!
+ 
+ In the column `fsq_category_labels` we find `[Dining and Drinking > Bakery]`. Great, right? Well, yes and no. Of course we can use it and we will get back some results.
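+ 
+ For instance, a sketch of a category-based filter (assuming `fsq_category_labels` is read back as a plain object column; casting to str is a simple way to substring-match list-valued labels):
+ 
+ ```python
+ # Match on the category labels instead of the place name.
+ # astype(str) flattens entries like "[Dining and Drinking > Bakery]".
+ mask = gdf["fsq_category_labels"].astype(str).str.contains("Bakery", na=False)
+ bakeries_by_category = gdf[mask]
+ ```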
+ 
+ However, looking a bit closer, we can quickly see why these categories do not seem to work that well:
+ 
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/vtDreZy3bdf7UOnQyqWQG.png)
+ 
+ Sometimes there are no categories at all, and entries like `Beef&bakery` probably should have gotten more than one category.
+ 
+ So how can we solve this?
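+ 
+ The direction hinted at in the Motivation section is text embeddings. A hypothetical sketch of that idea on a small sample of names, using sentence-transformers (the model choice and sampling are my assumptions, not the README's workflow):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer
+ 
+ # Embed a sample of place names once, then rank them by cosine
+ # similarity to a free-text query (embedding all 100M names at once
+ # would be impractical on most machines).
+ model = SentenceTransformer("all-MiniLM-L6-v2")
+ 
+ sample = gdf["name"].dropna().sample(100_000, random_state=0)
+ name_embeddings = model.encode(sample.tolist(), normalize_embeddings=True)
+ 
+ query_embedding = model.encode(["bakery"], normalize_embeddings=True)
+ scores = name_embeddings @ query_embedding.T  # cosine similarity
+ top = sample.iloc[scores.ravel().argsort()[::-1][:20]]  # 20 best matches
+ ```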