|
---
license: apache-2.0
task_categories:
- feature-extraction
size_categories:
- 100M<n<1B
tags:
- geospatial
---
|
|
|
# Foursquare OS Places 100M |
|
|
|
Full Foursquare OS Places dump from https://opensource.foursquare.com/os-places/. |
|
This is a single (geo-)parquet file based on the 81 individual parquet files from fused.io at https://source.coop/fused/fsq-os-places/2024-11-19/places.
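For reference, merging many parquet files into a single one can be done with DuckDB. A minimal sketch, assuming a local copy of the source files (not necessarily the exact command used here; the glob path is a placeholder):

```python
import duckdb

# Combine all source parquet files into one; adjust the glob to your local copy
duckdb.sql("""
    COPY (SELECT * FROM 'fsq-os-places/2024-11-19/places/*.parquet')
    TO 'foursquare_places.parquet' (FORMAT PARQUET);
""")
```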
|
|
|
As it's just ~10 GB, it's fairly easy to handle as a single file and can easily be queried with modern technologies like DuckDB's httpfs extension.
|
|
|
## Ways to query the file & visualize the results |
|
If you just want to poke around in the data to get an idea of the kind of places to expect, I'd recommend DuckDB.
|
Hugging Face has an integrated DuckDB WASM console, but it's too slow (or runs out of memory) when run over the entire file. You can try it [here](https://huggingface.co/datasets/do-me/foursquare_places_100M?sql_console=true&sql=--+The+SQL+console+is+powered+by+DuckDB+WASM+and+runs+entirely+in+the+browser.%0A--+Get+started+by+typing+a+query+or+selecting+a+view+from+the+options+below.%0ASELECT+*+FROM+train+WHERE+name+ILIKE+%27%25bakery%25%27%3B%0A).
|
|
|
A better way is to run DuckDB locally and query the file over httpfs: thanks to HTTP range requests, DuckDB only pulls the parts of the file that match the query, so you do not need to download the whole file. This works e.g. in Python or in the CLI.

Here I use Jupyter and convert the results to a pandas DataFrame for convenient display.
|
|
|
### Example 1: Queried over httpfs with DuckDB |
|
|
|
```python |
|
import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required extension for remote queries

# Case-insensitive search over the remote file; only matching parts are fetched
duckdb.sql("SELECT * FROM 'hf://datasets/do-me/foursquare_places_100M/foursquare_places.parquet' WHERE name ILIKE '%bakery%'").df()
|
``` |
|
|
|
This command takes roughly 70 seconds on my system/network (M3 Max) and yields 104,985 entries:
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/LsCYG2HTQmO0t6cJa3Y3R.png) |
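If you only need the number of matches, a count-only query is much lighter, since DuckDB only has to fetch the columns referenced in the query. A minimal sketch with the same httpfs setup:

```python
import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")

# Only the name column has to be scanned, so far less data is transferred
duckdb.sql("""
    SELECT count(*)
    FROM 'hf://datasets/do-me/foursquare_places_100M/foursquare_places.parquet'
    WHERE name ILIKE '%bakery%'
""").show()
```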
|
|
|
### Example 2: Queried fully locally with DuckDB |
|
|
|
This method is much faster, but you need to download the file once. Just download it directly by clicking [this link](https://huggingface.co/datasets/do-me/foursquare_places_100M/resolve/main/foursquare_places.parquet).
|
You could also use the Hugging Face `datasets` library, or download the file programmatically with `huggingface_hub`, as sketched below.
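A minimal sketch with `huggingface_hub` (which caches the file locally and returns its path):

```python
from huggingface_hub import hf_hub_download

# Downloads the parquet file into the local Hugging Face cache (once) and returns its path
path = hf_hub_download(
    repo_id="do-me/foursquare_places_100M",
    filename="foursquare_places.parquet",
    repo_type="dataset",
)
print(path)
```

Once downloaded, query the local copy: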
|
|
|
```python |
|
import duckdb

# Same query as before, now against the local copy
duckdb.sql("SELECT * FROM 'foursquare_places.parquet' WHERE name ILIKE '%bakery%'").df()
|
``` |
|
|
|
It yields precisely the same results but takes only 4.5 seconds! |
|
|
|
### Example 3: Queried fully locally with Geopandas |
|
|
|
In case you'd like to do some easy geoprocessing, just stick to GeoPandas. It's of course a bit slower and loads everything into memory, but it gets the job done nicely.
|
|
|
```python |
|
import geopandas as gpd |
|
gdf = gpd.read_parquet("foursquare_places.parquet") |
|
``` |
|
|
|
Loading the gdf once takes 2-3 minutes in my case.
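If you only need some of the columns, loading gets considerably faster and lighter on memory. A sketch, assuming you only care about names and geometries:

```python
import geopandas as gpd

# Reading a column subset cuts load time and memory usage substantially
gdf = gpd.read_parquet("foursquare_places.parquet", columns=["name", "geometry"])
```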
|
|
|
Then you can make use of good old pandas query operators and geospatial tools. |
|
|
|
E.g. looking for bakeries again:
|
```python |
|
gdf[gdf["name"].str.contains("bakery")] |
|
``` |
|
I was actually surprised at how efficient the string operator in pandas is: it only takes 11 seconds, which is fairly fast for 100M rows!
|
|
|
But to be fair, the actual equivalent to ILIKE in pandas would be this query: |
|
|
|
```python |
|
gdf[gdf["name"].str.contains("bakery", case=False, na=False)] |
|
``` |
|
|
|
It yields exactly the same number of rows as the SQL command but takes 19 seconds, so, as expected, much slower than DuckDB (especially considering that we only count the query time, not the loading time).
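As a quick example of the geospatial side, here is a minimal sketch of a bounding-box filter with GeoPandas' `cx` coordinate indexer, assuming the geometries are in WGS84 (the coordinates are a rough bounding box around Rome, chosen purely for illustration):

```python
# gdf.cx[xmin:xmax, ymin:ymax] slices by bounding box (lon/lat in WGS84)
rome = gdf.cx[12.3:12.7, 41.7:42.0]
```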
|
|
|
### Example 4: Queried fully locally with Geopandas and visualized with Lonboard |
|
|
|
If you want to quickly visualize the data, Lonboard is a super convenient Jupyter wrapper for deck.gl.
|
Install it with `pip install lonboard` and you're ready to go. |
|
|
|
```python |
|
import geopandas as gpd
from lonboard import viz

gdf = gpd.read_parquet("foursquare_places.parquet")

bakeries = gdf[gdf["name"].str.contains("bakery", case=False, na=False)]

viz(bakeries)  # opens an interactive deck.gl map in the notebook
|
``` |
|
This creates a nice interactive map with tooltips:
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/QIolrK2nlrENnkWlE6TFh.png) |
|
|
|
## Remote Semantic Search |
|
|
|
You can even perform semantic search remotely without downloading the whole file. |
|
Without any index on the data (like HNSW or another ANN structure), i.e. by simple brute force, it takes around 3 minutes on my machine to query the example file for Italy remotely.

The file weighs ~5 GB and consists of 3,029,191 rows. I used https://huggingface.co/minishlab/M2V_multilingual_output for the multilingual embeddings.
|
I will write a detailed tutorial on the processing in the near future. |
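Until then, here is a minimal sketch of how such an embeddings column could be produced with model2vec (the file names are placeholders and the actual pipeline may differ):

```python
import pandas as pd
from model2vec import StaticModel

model = StaticModel.from_pretrained("minishlab/M2V_multilingual_output")

# Hypothetical country subset; embed the place names and store them alongside the data
df = pd.read_parquet("foursquare_places_italy.parquet")
df["embeddings"] = list(model.encode(df["name"].fillna("").tolist()))
df.to_parquet("foursquare_places_italy_embeddings.parquet")
```

The remote semantic search itself then looks like this: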
|
|
|
```python |
|
import duckdb
from model2vec import StaticModel
import pandas as pd
import numpy as np

model = StaticModel.from_pretrained("minishlab/M2V_multilingual_output")

def search_similar_locations(query_vector, top_k=10, db_path=""):
    """
    Search for locations with similar embedding vectors using cosine similarity.

    Args:
        query_vector (list): The embedding vector to compare against
        top_k (int): Number of similar locations to return
        db_path (str): Path to the parquet file containing the embeddings and geometries

    Returns:
        pandas.DataFrame: DataFrame containing the top_k most similar locations with their coordinates
    """

    # Convert the query vector to a numpy array
    query_vector = np.array(query_vector).astype(np.float32)

    # Load the vss and spatial extensions (install them once first if needed)
    con = duckdb.connect()
    try:
        # con.execute("INSTALL vss;")
        # con.execute("INSTALL spatial;")
        con.execute("LOAD vss;")
        con.execute("LOAD spatial;")
    except Exception as e:
        print(f"Error loading extensions: {str(e)}")
        con.close()
        return None

    # Define a custom cosine similarity function for use in SQL
    def cosine_similarity(arr1, arr2):
        if arr1 is None or arr2 is None:
            return None

        arr1 = np.array(arr1)
        arr2 = np.array(arr2)

        norm_arr1 = np.linalg.norm(arr1)
        norm_arr2 = np.linalg.norm(arr2)

        if norm_arr1 == 0 or norm_arr2 == 0:
            return 0.0  # Handle zero vectors

        return np.dot(arr1, arr2) / (norm_arr1 * norm_arr2)

    con.create_function('cosine_similarity', cosine_similarity, ['FLOAT[]', 'FLOAT[]'], 'DOUBLE')  # Parameter types, then return type

    # Construct the SQL query
    query = f"""
    WITH location_data AS (
        SELECT *,
               embeddings::FLOAT[] as embedding_arr -- Cast embedding to FLOAT[]
        FROM '{db_path}'
        -- LIMIT 1_000 -- uncomment to test on a small subset
    )
    SELECT
        name,
        -- geometry,
        ST_X(ST_GeomFromWKB(geometry)) as longitude,
        ST_Y(ST_GeomFromWKB(geometry)) as latitude,
        cosine_similarity(embedding_arr, ?::FLOAT[]) as cosine_sim
    FROM location_data
    ORDER BY cosine_sim DESC
    LIMIT {top_k};
    """

    # Execute the query and return the results as a DataFrame
    try:
        result = con.execute(query, parameters=(query_vector,)).df()  # Pass parameters as a tuple
        con.close()
        return result
    except Exception as e:
        print(f"Error executing query: {str(e)}")
        con.close()
        return None

# Search for similar locations
results = search_similar_locations(
    query_vector=model.encode("ski and snowboard"),
    top_k=50,
    db_path='hf://datasets/do-me/foursquare_places_100M/foursquare_places_italy_embeddings.parquet'  # can also be a local file
)

results
|
``` |
|
|
|
The resulting pandas df: |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/VkDGOHmSthVJtDeyof9KL.png) |
|
|
|
Note that the cosine similarity function could also be rewritten to run fully in DuckDB. |
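For instance, recent DuckDB versions (0.10+) ship a native `list_cosine_similarity` function, so the Python UDF could be dropped entirely. A sketch of the modified query, using the same variables as above:

```python
query = f"""
    SELECT
        name,
        ST_X(ST_GeomFromWKB(geometry)) AS longitude,
        ST_Y(ST_GeomFromWKB(geometry)) AS latitude,
        list_cosine_similarity(embeddings::FLOAT[], ?::FLOAT[]) AS cosine_sim
    FROM '{db_path}'
    ORDER BY cosine_sim DESC
    LIMIT {top_k};
"""
```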
|
|