danielrosehill committed
Commit 8afcf40 · 1 Parent(s): 040c017
Files changed (1)
  1. notebooks/data_exploration.ipynb +513 -0
notebooks/data_exploration.ipynb ADDED
@@ -0,0 +1,513 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# IFVI Value Factors Data Exploration\n",
+ "\n",
+ "This notebook provides a starting point for exploring and visualizing the IFVI Value Factors dataset. The dataset contains environmental value factors organized by region, impact type, and policy domain.\n",
+ "\n",
+ "## Dataset Overview\n",
+ "\n",
+ "The IFVI Value Factors dataset is organized in multiple ways:\n",
+ "- By region (continental regions, economic zones, development status)\n",
+ "- By impact type (health, ecosystem, economic, social impacts)\n",
+ "- By policy domain (climate, air quality, land use, waste management, water resources)\n",
+ "\n",
+ "Data is available in multiple formats: JSON, CSV, and Parquet."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup and Dependencies\n",
+ "\n",
+ "First, let's import the necessary libraries for data exploration and visualization."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Import standard data analysis libraries\n",
+ "import pandas as pd\n",
+ "import numpy as np\n",
+ "import matplotlib.pyplot as plt\n",
+ "import seaborn as sns\n",
+ "import json\n",
+ "from pathlib import Path\n",
+ "\n",
+ "# Set up plotting\n",
+ "plt.style.use('ggplot')\n",
+ "sns.set_theme(style=\"whitegrid\")\n",
+ "%matplotlib inline\n",
+ "plt.rcParams['figure.figsize'] = (12, 8)\n",
+ "\n",
+ "# Display settings for pandas\n",
+ "pd.set_option('display.max_columns', None)\n",
+ "pd.set_option('display.max_rows', 100)\n",
+ "pd.set_option('display.width', 1000)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Data Loading Functions\n",
+ "\n",
+ "Let's define some helper functions to load data from different sources and formats."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define the base data directory\n",
+ "DATA_DIR = Path('../data')\n",
+ "\n",
+ "def load_json_file(file_path):\n",
+ " \"\"\"Load a JSON file into a Python dictionary.\"\"\"\n",
+ " try:\n",
+ " with open(file_path, 'r') as f:\n",
+ " return json.load(f)\n",
+ " except Exception as e:\n",
+ " print(f\"Error loading {file_path}: {e}\")\n",
+ " return None\n",
+ "\n",
+ "def load_csv_file(file_path):\n",
+ " \"\"\"Load a CSV file into a pandas DataFrame.\"\"\"\n",
+ " try:\n",
+ " return pd.read_csv(file_path)\n",
+ " except Exception as e:\n",
+ " print(f\"Error loading {file_path}: {e}\")\n",
+ " return None\n",
+ "\n",
+ "def load_parquet_file(file_path):\n",
+ " \"\"\"Load a Parquet file into a pandas DataFrame.\"\"\"\n",
+ " try:\n",
+ " return pd.read_parquet(file_path)\n",
+ " except Exception as e:\n",
+ " print(f\"Error loading {file_path}: {e}\")\n",
+ " return None\n",
+ "\n",
+ "def get_available_regions():\n",
+ " \"\"\"Get a list of available regions in the dataset.\"\"\"\n",
+ " continental_dir = DATA_DIR / 'by-region' / 'continental'\n",
+ " if continental_dir.exists():\n",
+ " return [d.name for d in continental_dir.iterdir() if d.is_dir()]\n",
+ " return []\n",
+ "\n",
+ "def get_available_impact_types():\n",
+ " \"\"\"Get a list of available impact types in the dataset.\"\"\"\n",
+ " impact_dir = DATA_DIR / 'by-impact-type'\n",
+ " if impact_dir.exists():\n",
+ " return [d.name for d in impact_dir.iterdir() if d.is_dir()]\n",
+ " return []"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Exploring the Dataset Structure\n",
+ "\n",
+ "Let's explore the structure of the dataset to understand what's available."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Check available regions\n",
+ "regions = get_available_regions()\n",
+ "print(f\"Available regions: {regions}\")\n",
+ "\n",
+ "# Check available impact types\n",
+ "impact_types = get_available_impact_types()\n",
+ "print(f\"Available impact types: {impact_types}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Loading Aggregated Data\n",
+ "\n",
+ "Let's start by loading some of the aggregated data files to get an overview of the dataset."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load composite value factors from CSV\n",
+ "composite_csv_path = DATA_DIR / 'aggregated' / 'composite_value_factors.csv'\n",
+ "if composite_csv_path.exists():\n",
+ " composite_df = load_csv_file(composite_csv_path)\n",
+ " if composite_df is not None:\n",
+ " print(\"Composite Value Factors (CSV):\")\n",
+ " display(composite_df.head())\n",
+ "else:\n",
+ " print(f\"File not found: {composite_csv_path}\")\n",
+ "\n",
+ "# Try loading some CSV files from the csv directory\n",
+ "csv_files = list((DATA_DIR / 'csv' / 'by-methodology').glob('*.csv'))\n",
+ "if csv_files:\n",
+ " print(f\"\\nFound {len(csv_files)} CSV files in by-methodology directory\")\n",
+ " for file_path in csv_files[:3]: # Show first 3 files\n",
+ " print(f\"\\nLoading {file_path.name}:\")\n",
+ " df = load_csv_file(file_path)\n",
+ " if df is not None:\n",
+ " display(df.head())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Exploring Regional Data\n",
+ "\n",
+ "Let's explore the data for specific regions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Function to get countries in a region\n",
+ "def get_countries_in_region(region):\n",
+ " region_dir = DATA_DIR / 'by-region' / 'continental' / region\n",
+ " if region_dir.exists():\n",
+ " return [f.stem for f in region_dir.glob('*.json')]\n",
+ " return []\n",
+ "\n",
+ "# Check countries in each region\n",
+ "for region in regions:\n",
+ " countries = get_countries_in_region(region)\n",
+ " print(f\"{region}: {len(countries)} countries\")\n",
+ " if countries:\n",
+ " print(f\" Sample countries: {', '.join(countries[:5])}...\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Loading Country-Specific Data\n",
+ "\n",
+ "Let's load data for a specific country and explore it."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Choose a sample country (adjust as needed)\n",
+ "sample_region = regions[0] if regions else None\n",
+ "if sample_region:\n",
+ " countries = get_countries_in_region(sample_region)\n",
+ " sample_country = countries[0] if countries else None\n",
+ "\n",
+ " if sample_country:\n",
+ " country_file = DATA_DIR / 'by-region' / 'continental' / sample_region / f\"{sample_country}.json\"\n",
+ " print(f\"Loading data for {sample_country} in {sample_region}\")\n",
+ " country_data = load_json_file(country_file)\n",
+ "\n",
+ " if country_data:\n",
+ " # Display basic information about the country data\n",
+ " print(f\"\\nData keys: {list(country_data.keys()) if isinstance(country_data, dict) else 'Not a dictionary'}\")\n",
+ "\n",
+ " # Convert to DataFrame if possible for easier exploration\n",
+ " if isinstance(country_data, dict):\n",
+ " try:\n",
+ " # This is a placeholder - adjust based on actual data structure\n",
+ " country_df = pd.json_normalize(country_data)\n",
+ " display(country_df.head())\n",
+ " except Exception as e:\n",
+ " print(f\"Could not convert to DataFrame: {e}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Exploring Impact Type Data\n",
+ "\n",
+ "Let's explore data organized by impact type."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Function to get files in an impact type directory\n",
+ "def get_files_in_impact_type(impact_type):\n",
+ " impact_dir = DATA_DIR / 'by-impact-type' / impact_type\n",
+ " if impact_dir.exists():\n",
+ " return list(impact_dir.glob('**/*.*'))\n",
+ " return []\n",
+ "\n",
+ "# Check files in each impact type\n",
+ "for impact_type in impact_types:\n",
+ " files = get_files_in_impact_type(impact_type)\n",
+ " print(f\"{impact_type}: {len(files)} files\")\n",
+ " if files:\n",
+ " print(f\" Sample files: {', '.join(str(f.relative_to(DATA_DIR / 'by-impact-type')) for f in files[:3])}...\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Data Visualization Examples\n",
+ "\n",
+ "Let's create some example visualizations based on the available data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Example 1: Bar chart comparing values across countries (placeholder)\n",
+ "# Replace with actual data loading and processing based on your exploration\n",
+ "def plot_country_comparison(region, value_factor, n_countries=10):\n",
+ " \"\"\"Plot a comparison of a specific value factor across countries in a region.\"\"\"\n",
+ " # This is a placeholder - replace with actual data loading logic\n",
+ " countries = get_countries_in_region(region)[:n_countries]\n",
+ "\n",
+ " # Placeholder for data - replace with actual data\n",
+ " np.random.seed(42) # For reproducibility\n",
+ " values = np.random.rand(len(countries)) * 100\n",
+ "\n",
+ " plt.figure(figsize=(12, 8))\n",
+ " plt.bar(countries, values)\n",
+ " plt.title(f\"{value_factor} Comparison Across {region} Countries\")\n",
+ " plt.xlabel(\"Country\")\n",
+ " plt.ylabel(f\"{value_factor} Value\")\n",
+ " plt.xticks(rotation=45, ha='right')\n",
+ " plt.tight_layout()\n",
+ " plt.show()\n",
+ "\n",
+ "# Example visualization with placeholder data\n",
+ "if regions:\n",
+ " plot_country_comparison(regions[0], \"CO2 Emissions Value Factor\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Example 2: Heatmap of correlations between different value factors (placeholder)\n",
+ "# Replace with actual data loading and processing\n",
+ "\n",
+ "# Placeholder for correlation data - replace with actual data\n",
+ "np.random.seed(42) # For reproducibility\n",
+ "factor_names = ['CO2', 'PM2.5', 'NOx', 'SOx', 'Land Use', 'Water Consumption']\n",
+ "corr_matrix = np.random.rand(len(factor_names), len(factor_names))\n",
+ "# Make it symmetric for a valid correlation matrix\n",
+ "corr_matrix = (corr_matrix + corr_matrix.T) / 2\n",
+ "np.fill_diagonal(corr_matrix, 1)\n",
+ "\n",
+ "# Create a DataFrame\n",
+ "corr_df = pd.DataFrame(corr_matrix, index=factor_names, columns=factor_names)\n",
+ "\n",
+ "# Plot the heatmap\n",
+ "plt.figure(figsize=(10, 8))\n",
+ "sns.heatmap(corr_df, annot=True, cmap='coolwarm', vmin=-1, vmax=1)\n",
+ "plt.title('Correlation Between Different Value Factors (Example)')\n",
+ "plt.tight_layout()\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Geographic Visualization\n",
+ "\n",
+ "Let's create a map visualization to show value factors across different regions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# For geographic visualizations, we'll need additional libraries\n",
+ "# Uncomment and run if needed\n",
+ "# !pip install geopandas matplotlib\n",
+ "\n",
+ "# Example code for geographic visualization\n",
+ "try:\n",
+ " import geopandas as gpd\n",
+ "\n",
+ " # Load world map data (note: gpd.datasets was removed in geopandas 1.0;\n",
+ " # on newer versions, load Natural Earth data via the geodatasets package)\n",
+ " world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))\n",
+ "\n",
+ " # Placeholder for value factor data - replace with actual data\n",
+ " # Here we're just assigning random values to countries\n",
+ " np.random.seed(42) # For reproducibility\n",
+ " world['value_factor'] = np.random.rand(len(world)) * 100\n",
+ "\n",
+ " # Create the plot\n",
+ " fig, ax = plt.subplots(1, 1, figsize=(15, 10))\n",
+ " world.plot(column='value_factor', ax=ax, legend=True,\n",
+ " legend_kwds={'label': \"Value Factor (Example)\",\n",
+ " 'orientation': \"horizontal\"},\n",
+ " cmap='YlOrRd')\n",
+ " ax.set_title('Global Distribution of Value Factors (Example)', fontsize=15)\n",
+ " plt.tight_layout()\n",
+ " plt.show()\n",
+ "\n",
+ "except ImportError:\n",
+ " print(\"To create geographic visualizations, install geopandas:\")\n",
+ " print(\"pip install geopandas matplotlib\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Time Series Analysis\n",
+ "\n",
+ "If the data contains time series information, we can analyze trends over time."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Placeholder for time series data - replace with actual data if available\n",
+ "# Generate example time series data at month-end frequency\n",
+ "# (pandas >= 2.2 prefers freq='ME' over the deprecated 'M')\n",
+ "dates = pd.date_range(start='2015-01-01', end='2024-01-01', freq='M')\n",
+ "np.random.seed(42) # For reproducibility\n",
+ "values = np.cumsum(np.random.randn(len(dates))) + 50 # Random walk with drift\n",
+ "\n",
+ "# Create a DataFrame\n",
+ "ts_df = pd.DataFrame({'Date': dates, 'Value Factor': values})\n",
+ "\n",
+ "# Plot the time series\n",
+ "plt.figure(figsize=(14, 7))\n",
+ "plt.plot(ts_df['Date'], ts_df['Value Factor'])\n",
+ "plt.title('Value Factor Trend Over Time (Example)', fontsize=15)\n",
+ "plt.xlabel('Date')\n",
+ "plt.ylabel('Value Factor')\n",
+ "plt.grid(True)\n",
+ "plt.tight_layout()\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Comparative Analysis\n",
+ "\n",
+ "Let's compare value factors across different categories or regions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Placeholder for comparative data - replace with actual data\n",
+ "categories = ['Health Impacts', 'Ecosystem Impacts', 'Economic Impacts', 'Social Impacts']\n",
+ "regions_sample = ['Africa', 'Asia', 'Europe', 'North America', 'South America']\n",
+ "\n",
+ "# Generate random data for demonstration\n",
+ "np.random.seed(42) # For reproducibility\n",
+ "data = np.random.rand(len(regions_sample), len(categories)) * 100\n",
+ "\n",
+ "# Create a DataFrame\n",
+ "comp_df = pd.DataFrame(data, index=regions_sample, columns=categories)\n",
+ "\n",
+ "# Plot the data\n",
+ "comp_df.plot(kind='bar', figsize=(14, 8))\n",
+ "plt.title('Value Factors by Impact Type Across Regions (Example)', fontsize=15)\n",
+ "plt.xlabel('Region')\n",
+ "plt.ylabel('Value Factor')\n",
+ "plt.legend(title='Impact Type')\n",
+ "plt.grid(axis='y')\n",
+ "plt.tight_layout()\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Next Steps\n",
+ "\n",
+ "Based on this initial exploration, here are some suggested next steps:\n",
+ "\n",
+ "1. **Deeper Data Exploration**:\n",
+ " - Explore the structure of country-specific JSON files in detail\n",
+ " - Analyze the relationships between different value factors\n",
+ " - Compare value factors across different regions and impact types\n",
+ "\n",
+ "2. **Advanced Visualizations**:\n",
+ " - Create interactive visualizations using libraries like Plotly\n",
+ " - Develop choropleth maps to show global distribution of value factors\n",
+ " - Create dashboards for comprehensive data exploration\n",
+ "\n",
+ "3. **Statistical Analysis**:\n",
+ " - Perform correlation analysis between different value factors\n",
+ " - Conduct cluster analysis to identify groups of countries with similar profiles\n",
+ " - Develop predictive models based on the value factors\n",
+ "\n",
+ "4. **Policy Implications**:\n",
+ " - Analyze how value factors relate to policy decisions\n",
+ " - Compare value factors with actual policy implementations\n",
+ " - Evaluate the economic implications of different value factors"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+ }