{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# IFVI Value Factors Data Exploration\n", "\n", "This notebook provides a starting point for exploring and visualizing the IFVI Value Factors dataset. The dataset contains environmental value factors organized by region, impact type, and policy domain.\n", "\n", "## Dataset Overview\n", "\n", "The IFVI Value Factors dataset is organized in multiple ways:\n", "- By region (continental regions, economic zones, development status)\n", "- By impact type (health, ecosystem, economic, social impacts)\n", "- By policy domain (climate, air quality, land use, waste management, water resources)\n", "\n", "Data is available in multiple formats: JSON, CSV, and Parquet." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup and Dependencies\n", "\n", "First, let's import the necessary libraries for data exploration and visualization." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Import standard data analysis libraries\n", "import pandas as pd\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "import json\n", "import os\n", "from pathlib import Path\n", "import glob\n", "\n", "# Set up plotting\n", "plt.style.use('ggplot')\n", "sns.set(style=\"whitegrid\")\n", "%matplotlib inline\n", "plt.rcParams['figure.figsize'] = (12, 8)\n", "\n", "# Display settings for pandas\n", "pd.set_option('display.max_columns', None)\n", "pd.set_option('display.max_rows', 100)\n", "pd.set_option('display.width', 1000)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data Loading Functions\n", "\n", "Let's define some helper functions to load data from different sources and formats." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define the base data directory\n", "DATA_DIR = Path('../data')\n", "\n", "def load_json_file(file_path):\n", " \"\"\"Load a JSON file into a Python dictionary.\"\"\"\n", " try:\n", " with open(file_path, 'r') as f:\n", " return json.load(f)\n", " except Exception as e:\n", " print(f\"Error loading {file_path}: {e}\")\n", " return None\n", "\n", "def load_csv_file(file_path):\n", " \"\"\"Load a CSV file into a pandas DataFrame.\"\"\"\n", " try:\n", " return pd.read_csv(file_path)\n", " except Exception as e:\n", " print(f\"Error loading {file_path}: {e}\")\n", " return None\n", " \n", "def load_parquet_file(file_path):\n", " \"\"\"Load a Parquet file into a pandas DataFrame.\"\"\"\n", " try:\n", " return pd.read_parquet(file_path)\n", " except Exception as e:\n", " print(f\"Error loading {file_path}: {e}\")\n", " return None\n", " \n", "def get_available_regions():\n", " \"\"\"Get a list of available regions in the dataset.\"\"\"\n", " continental_dir = DATA_DIR / 'by-region' / 'continental'\n", " if continental_dir.exists():\n", " return [d.name for d in continental_dir.iterdir() if d.is_dir()]\n", " return []\n", "\n", "def get_available_impact_types():\n", " \"\"\"Get a list of available impact types in the dataset.\"\"\"\n", " impact_dir = DATA_DIR / 'by-impact-type'\n", " if impact_dir.exists():\n", " return [d.name for d in impact_dir.iterdir() if d.is_dir()]\n", " return []" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exploring the Dataset Structure\n", "\n", "Let's explore the structure of the dataset to understand what's available." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Check available regions\n", "regions = get_available_regions()\n", "print(f\"Available regions: {regions}\")\n", "\n", "# Check available impact types\n", "impact_types = get_available_impact_types()\n", "print(f\"Available impact types: {impact_types}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading Aggregated Data\n", "\n", "Let's start by loading some of the aggregated data files to get an overview of the dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load composite value factors from CSV\n", "composite_csv_path = DATA_DIR / 'aggregated' / 'composite_value_factors.csv'\n", "if composite_csv_path.exists():\n", " composite_df = load_csv_file(composite_csv_path)\n", " if composite_df is not None:\n", " print(\"Composite Value Factors (CSV):\")\n", " display(composite_df.head())\n", "else:\n", " print(f\"File not found: {composite_csv_path}\")\n", " \n", "# Try loading some CSV files from the csv directory\n", "csv_files = list(Path(DATA_DIR / 'csv' / 'by-methodology').glob('*.csv'))\n", "if csv_files:\n", " print(f\"\\nFound {len(csv_files)} CSV files in by-methodology directory\")\n", " for file_path in csv_files[:3]: # Show first 3 files\n", " print(f\"\\nLoading {file_path.name}:\")\n", " df = load_csv_file(file_path)\n", " if df is not None:\n", " display(df.head())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exploring Regional Data\n", "\n", "Let's explore the data for specific regions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Function to get countries in a region\n", "def get_countries_in_region(region):\n", " region_dir = DATA_DIR / 'by-region' / 'continental' / region\n", " if region_dir.exists():\n", " return [f.stem for f in region_dir.glob('*.json')]\n", " return []\n", "\n", "# Check countries in each region\n", "for region in regions:\n", " countries = get_countries_in_region(region)\n", " print(f\"{region}: {len(countries)} countries\")\n", " if countries:\n", " print(f\" Sample countries: {', '.join(countries[:5])}...\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading Country-Specific Data\n", "\n", "Let's load data for a specific country and explore it." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Choose a sample country (adjust as needed)\n", "sample_region = regions[0] if regions else None\n", "if sample_region:\n", " countries = get_countries_in_region(sample_region)\n", " sample_country = countries[0] if countries else None\n", " \n", " if sample_country:\n", " country_file = DATA_DIR / 'by-region' / 'continental' / sample_region / f\"{sample_country}.json\"\n", " print(f\"Loading data for {sample_country} in {sample_region}\")\n", " country_data = load_json_file(country_file)\n", " \n", " if country_data:\n", " # Display basic information about the country data\n", " print(f\"\\nData keys: {list(country_data.keys()) if isinstance(country_data, dict) else 'Not a dictionary'}\")\n", " \n", " # Convert to DataFrame if possible for easier exploration\n", " if isinstance(country_data, dict):\n", " try:\n", " # This is a placeholder - adjust based on actual data structure\n", " country_df = pd.json_normalize(country_data)\n", " display(country_df.head())\n", " except Exception as e:\n", " print(f\"Could not convert to DataFrame: {e}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exploring Impact Type Data\n", "\n", "Let's explore data organized by impact type." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Function to get files in an impact type directory\n", "def get_files_in_impact_type(impact_type):\n", " impact_dir = DATA_DIR / 'by-impact-type' / impact_type\n", " if impact_dir.exists():\n", " return list(impact_dir.glob('**/*.*'))\n", " return []\n", "\n", "# Check files in each impact type\n", "for impact_type in impact_types:\n", " files = get_files_in_impact_type(impact_type)\n", " print(f\"{impact_type}: {len(files)} files\")\n", " if files:\n", " print(f\" Sample files: {', '.join(str(f.relative_to(DATA_DIR / 'by-impact-type')) for f in files[:3])}...\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data Visualization Examples\n", "\n", "Let's create some example visualizations based on the available data." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Example 1: Bar chart comparing values across countries (placeholder)\n", "# Replace with actual data loading and processing based on your exploration\n", "def plot_country_comparison(region, value_factor, n_countries=10):\n", " \"\"\"Plot a comparison of a specific value factor across countries in a region.\"\"\"\n", " # This is a placeholder - replace with actual data loading logic\n", " countries = get_countries_in_region(region)[:n_countries]\n", " \n", " # Placeholder for data - replace with actual data\n", " np.random.seed(42) # For reproducibility\n", " values = np.random.rand(len(countries)) * 100\n", " \n", " plt.figure(figsize=(12, 8))\n", " plt.bar(countries, values)\n", " plt.title(f\"{value_factor} Comparison Across {region} Countries\")\n", " plt.xlabel(\"Country\")\n", " plt.ylabel(f\"{value_factor} Value\")\n", " plt.xticks(rotation=45, ha='right')\n", " plt.tight_layout()\n", " plt.show()\n", "\n", "# Example visualization with placeholder data\n", "if regions:\n", " plot_country_comparison(regions[0], \"CO2 Emissions Value Factor\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Example 2: Heatmap of correlations between different value factors (placeholder)\n", "# Replace with actual data loading and processing\n", "\n", "# Placeholder for correlation data - replace with actual data\n", "np.random.seed(42) # For reproducibility\n", "factor_names = ['CO2', 'PM2.5', 'NOx', 'SOx', 'Land Use', 'Water Consumption']\n", "corr_matrix = np.random.rand(len(factor_names), len(factor_names))\n", "# Make it symmetric for a valid correlation matrix\n", "corr_matrix = (corr_matrix + corr_matrix.T) / 2\n", "np.fill_diagonal(corr_matrix, 1)\n", "\n", "# Create a DataFrame\n", "corr_df = pd.DataFrame(corr_matrix, index=factor_names, columns=factor_names)\n", "\n", "# Plot the heatmap\n", "plt.figure(figsize=(10, 8))\n", "sns.heatmap(corr_df, annot=True, cmap='coolwarm', vmin=-1, vmax=1)\n", "plt.title('Correlation Between Different Value Factors (Example)')\n", "plt.tight_layout()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Geographic Visualization\n", "\n", "Let's create a map visualization to show value factors across different regions." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# For geographic visualizations, we'll need additional libraries\n", "# Uncomment and run if needed\n", "# !pip install geopandas matplotlib\n", "\n", "# Example code for geographic visualization\n", "try:\n", " import geopandas as gpd\n", " \n", " # Load world map data\n", " world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))\n", " \n", " # Placeholder for value factor data - replace with actual data\n", " # Here we're just assigning random values to countries\n", " np.random.seed(42) # For reproducibility\n", " world['value_factor'] = np.random.rand(len(world)) * 100\n", " \n", " # Create the plot\n", " fig, ax = plt.subplots(1, 1, figsize=(15, 10))\n", " world.plot(column='value_factor', ax=ax, legend=True,\n", " legend_kwds={'label': \"Value Factor (Example)\",\n", " 'orientation': \"horizontal\"},\n", " cmap='YlOrRd')\n", " ax.set_title('Global Distribution of Value Factors (Example)', fontsize=15)\n", " plt.tight_layout()\n", " plt.show()\n", " \n", "except ImportError:\n", " print(\"To create geographic visualizations, install geopandas:\")\n", " print(\"pip install geopandas matplotlib\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Time Series Analysis\n", "\n", "If the data contains time series information, we can analyze trends over time." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Placeholder for time series data - replace with actual data if available\n", "# Generate example time series data\n", "dates = pd.date_range(start='2015-01-01', end='2024-01-01', freq='M')\n", "np.random.seed(42) # For reproducibility\n", "values = np.cumsum(np.random.randn(len(dates))) + 50 # Random walk with drift\n", "\n", "# Create a DataFrame\n", "ts_df = pd.DataFrame({'Date': dates, 'Value Factor': values})\n", "\n", "# Plot the time series\n", "plt.figure(figsize=(14, 7))\n", "plt.plot(ts_df['Date'], ts_df['Value Factor'])\n", "plt.title('Value Factor Trend Over Time (Example)', fontsize=15)\n", "plt.xlabel('Date')\n", "plt.ylabel('Value Factor')\n", "plt.grid(True)\n", "plt.tight_layout()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Comparative Analysis\n", "\n", "Let's compare value factors across different categories or regions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Placeholder for comparative data - replace with actual data\n", "categories = ['Health Impacts', 'Ecosystem Impacts', 'Economic Impacts', 'Social Impacts']\n", "regions_sample = ['Africa', 'Asia', 'Europe', 'North America', 'South America']\n", "\n", "# Generate random data for demonstration\n", "np.random.seed(42) # For reproducibility\n", "data = np.random.rand(len(regions_sample), len(categories)) * 100\n", "\n", "# Create a DataFrame\n", "comp_df = pd.DataFrame(data, index=regions_sample, columns=categories)\n", "\n", "# Plot the data\n", "comp_df.plot(kind='bar', figsize=(14, 8))\n", "plt.title('Value Factors by Impact Type Across Regions (Example)', fontsize=15)\n", "plt.xlabel('Region')\n", "plt.ylabel('Value Factor')\n", "plt.legend(title='Impact Type')\n", "plt.grid(axis='y')\n", "plt.tight_layout()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Next Steps\n", "\n", "Based on this initial exploration, here are some suggested next steps:\n", "\n", "1. 
**Deeper Data Exploration**:\n", " - Explore the structure of country-specific JSON files in detail\n", " - Analyze the relationships between different value factors\n", " - Compare value factors across different regions and impact types\n", "\n", "2. **Advanced Visualizations**:\n", " - Create interactive visualizations using libraries like Plotly\n", " - Develop choropleth maps to show global distribution of value factors\n", " - Create dashboards for comprehensive data exploration\n", "\n", "3. **Statistical Analysis**:\n", " - Perform correlation analysis between different value factors\n", " - Conduct cluster analysis to identify groups of countries with similar profiles\n", " - Develop predictive models based on the value factors\n", "\n", "4. **Policy Implications**:\n", " - Analyze how value factors relate to policy decisions\n", " - Compare value factors with actual policy implementations\n", " - Evaluate the economic implications of different value factors" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 4 }