cboettig committed
Commit 5107814 · 1 Parent(s): e6fc859

notes :pen:

Files changed (2)
  1. README.md +28 -0
  2. preprocess.md +27 -0
README.md CHANGED
@@ -13,3 +13,31 @@ license: bsd-2-clause
  :hugs: Shiny App on Huggingface: <https://huggingface.co/spaces/boettiger-lab/geo-llm-r>

  Work in progress. This is a proof-of-principle for an LLM-driven interface to dynamic mapping. Key technologies include duckdb, geoparquet, pmtiles, maplibre, open LLMs (via VLLM + LiteLLM). R interface through ellmer (LLMs), mapgl (maplibre), shiny, and duckdb.
+
+ # Setup
+
+ ## GitHub with HuggingFace Deploy
+
+ All edits should be pushed to GitHub. Edits to the `main` branch are automatically deployed to HuggingFace via GitHub Actions.
+ When using this scaffold, you will first have to set up your auto-deploy system:
+
+ - [Create a new HuggingFace Space](https://huggingface.co/new-space) (any template is fine; it will be overwritten).
+ - [Create a HuggingFace Token](https://huggingface.co/settings/tokens/new?tokenType=write) with write permissions if you do not have one.
+ - In the GitHub Settings of your repository, add the token as a "New Repository Secret" under `Secrets and Variables` -> `Actions` (`https://github.com/{USER}/{REPO}/settings/secrets/actions`).
+ - Edit the `.github/workflows/deploy.yml` file to specify your HuggingFace user name and the HF repo to publish to, along the lines of the sketch below.
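+
+ A minimal `deploy.yml` sketch, following the sync pattern from HuggingFace's documentation; the secret name (`HF_TOKEN`) and the `{USER}`/`{REPO}` placeholders are assumptions to replace with your own values:
+
+ ```yaml
+ name: Sync to HuggingFace Space
+ on:
+   push:
+     branches: [main]
+
+ jobs:
+   sync:
+     runs-on: ubuntu-latest
+     steps:
+       # Full history and LFS files are needed so the Space receives a
+       # complete, pushable git repository.
+       - uses: actions/checkout@v4
+         with:
+           fetch-depth: 0
+           lfs: true
+       - name: Push to hub
+         env:
+           HF_TOKEN: ${{ secrets.HF_TOKEN }}
+         run: git push --force https://{USER}:$HF_TOKEN@huggingface.co/spaces/{USER}/{REPO} main
+ ```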
+
+ ## Language Model setup
+
+ This example is designed to leverage open-source or open-weights models; you will need to adjust the API URL and API key accordingly. This could be a local model served with `vllm` or `ollama`, and of course commercial models should work too. The demo app currently runs against a VLLM + LiteLLM backend hosting a Llama3 variant on the National Research Platform.
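+
+ A minimal connection sketch using ellmer's `chat_openai()`, which works with any OpenAI-compatible endpoint; the URL, model name, and key variable below are placeholders:
+
+ ```r
+ library(ellmer)
+
+ # Hypothetical OpenAI-compatible endpoint (e.g. a LiteLLM proxy in front
+ # of vllm); swap in your own URL, model, and API key.
+ chat <- chat_openai(
+   base_url = "https://llm.example.org/v1",
+   model    = "llama3",
+   api_key  = Sys.getenv("OPENAI_API_KEY")
+ )
+ ```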
+
+ The LLM plays only a simple role: generating SQL queries from background information on the data, including the table schema (see the system prompt for details). Most open models I have experimented with do not support the [tool use](https://ellmer.tidyverse.org/articles/tool-calling.html) or [structured data](https://ellmer.tidyverse.org/articles/structured-data.html) interfaces very well compared to commercial models. An important trick used here when working with open models is merely requesting that the reply be structured as JSON. Open models are quite decent at this, and at SQL construction, given the necessary context about the data. The map and chart elements merely react to the resulting data frames, so the entire analysis is as transparent and reproducible as it would be if the user had composed their request in SQL instead of plain English.
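+
+ A sketch of that pattern, under assumed table, column, and prompt details (the app's real system prompt carries the full schema):
+
+ ```r
+ library(ellmer)
+ library(jsonlite)
+ library(duckdb)
+
+ # Hypothetical system prompt requesting a JSON-structured reply.
+ chat <- chat_openai(
+   base_url = "https://llm.example.org/v1",
+   model = "llama3",
+   system_prompt = paste(
+     "Translate the user's question into a duckdb SQL query over the table 'svi'.",
+     "Reply only with JSON of the form {\"query\": \"<SQL>\"}."
+   )
+ )
+
+ reply <- chat$chat("Which county has the highest mean social vulnerability?")
+ sql   <- fromJSON(reply)$query   # pull the SQL string out of the JSON reply
+
+ con <- dbConnect(duckdb())
+ df  <- dbGetQuery(con, sql)      # map and chart elements then react to df
+ ```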
+
+ ## Software Dependencies
+
+ The Dockerfile includes all dependencies required for the HuggingFace deployment, and can be used as a template or run directly to serve RStudio Server.
+
+ ## Data pre-processing
+
+ Pre-processing the data into cloud-native formats and hosting it on a high-bandwidth, highly available server is essential for efficient and scalable rendering. Pre-computing expensive operations such as zonal statistics across all features is also necessary. These steps are described in [preprocess.md](preprocess.md) and the corresponding scripts.
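+
+ For instance, once the data is hosted as (Geo)Parquet, duckdb can query it over HTTP without downloading it first; the URL and column names here are placeholders:
+
+ ```r
+ library(duckdb)
+
+ con <- dbConnect(duckdb())
+ dbExecute(con, "INSTALL httpfs;")
+ dbExecute(con, "LOAD httpfs;")
+
+ # Aggregate directly against a cloud-hosted Parquet file.
+ dbGetQuery(con, "
+   SELECT COUNTY, AVG(RPL_THEMES) AS mean_svi
+   FROM read_parquet('https://data.example.org/svi.parquet')
+   GROUP BY COUNTY
+ ")
+ ```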
+
preprocess.md ADDED
@@ -0,0 +1,27 @@
+ ---
+ ---
+
+
+ # Vector Layers
+
+ The heart of this application design is a vector dataset serialized as both (Geo)Parquet and PMTiles.
+ The parquet version allows for real-time calculations through rapid SQL queries via duckdb,
+ and the PMTiles version allows the data to be quickly visualized at any zoom through maplibre.
+ maplibre can also efficiently filter the PMTiles data given feature ids returned by duckdb.
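+
+ A hypothetical shiny-side sketch of that round trip: duckdb returns the ids of features matching a query, and mapgl's `set_filter()` restricts the rendered PMTiles layer to them (the layer id, file, and column names are assumptions):
+
+ ```r
+ library(duckdb)
+ library(mapgl)
+
+ con <- dbConnect(duckdb())
+ # ids of features matching some user request (placeholder query and columns)
+ ids <- dbGetQuery(con, "SELECT FIPS FROM 'svi.parquet' WHERE RPL_THEMES > 0.9")$FIPS
+
+ # Inside a shiny server: filter the PMTiles layer down to those features.
+ maplibre_proxy("map") |>
+   set_filter("svi-layer", list("in", list("get", "FIPS"), list("literal", ids)))
+ ```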
+
+ GDAL's `ogr2ogr` can generate both PMTiles and GeoParquet, though `tippecanoe` provides more
+ options for PMTiles generation and can produce nicer tile sets.
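+
+ For example (an untested sketch; file names are placeholders, and tippecanoe wants line-delimited GeoJSON as input):
+
+ ```sh
+ # GeoParquet via GDAL's Parquet driver
+ ogr2ogr -f Parquet features.parquet source.gpkg
+
+ # PMTiles via tippecanoe, letting it guess a max zoom (-zg)
+ ogr2ogr -f GeoJSONSeq features.geojsonl source.gpkg
+ tippecanoe -o features.pmtiles -zg --drop-densest-as-needed features.geojsonl
+ ```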
+
+ The demo uses the CDC Social Vulnerability data because it follows the hierarchical partitioning
+ used by the Census (Country -> State -> County -> Tract).
+
+ # Raster Layers
+
+ ## Generating static tiles
+
+ ## Zonal statistics calculations
+
+ The application is essentially driven by the vector layer data using SQL.
+ I find it helpful to pre-compute 'zonal' calculations, e.g. the mean value of each raster layer
+ within each feature of the 'focal' vector data set(s).
+
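+ One way to sketch that pre-computation, using the `exactextractr` package; the file names and the new column are assumptions:
+
+ ```r
+ library(sf)
+ library(terra)
+ library(exactextractr)
+
+ features <- st_read("tracts.gpkg")   # the 'focal' vector features
+ r <- rast("layer.tif")               # one raster layer
+
+ # Mean raster value within each feature; the result can be joined back
+ # into the (Geo)Parquet table that drives the app.
+ features$layer_mean <- exact_extract(r, features, "mean")
+ ```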