Adding a Tooltip

A `tooltip` can be added to a `folium.GeoJson` map layer to display data values when the mouse hovers over a feature.

# Double check what columns we have
ca_counties_gdf.columns
?folium.GeoJsonTooltip
# Define the basemap
map1 = folium.Map(location=[37.8721, -122.2578], # lat, lon around which to center the map
tiles='CartoDB Positron',
width=1000, # the width & height of the output map
height=600, # in pixels
zoom_start=6) # the zoom level for the data to be displayed
# Add the census tracts gdf layer
folium.GeoJson(ca_counties_gdf,
style_function = lambda x: {
'weight':2,
'color':"white",
'opacity':1,
'fillColor':"red",
'fillOpacity':0.6
},
tooltip=folium.GeoJsonTooltip(
fields=['NAME','POP2012','POP12_SQMI' ],
aliases=['County', 'Population', 'Population Density (mi2)'],
labels=True,
localize=True
),
).add_to(map1)
map1
As always, you can get more help by reading the documentation.

# Uncomment to view help
#folium.GeoJsonTooltip?
Exercise

Edit the code in the cell below to add the median age (`MED_AGE`) to the tooltip.

# Define the basemap
map1 = folium.Map(location=[37.8721, -122.2578], # lat, lon around which to center the map
tiles='CartoDB Positron',
width=1000, # the width & height of the output map
height=600, # in pixels
zoom_start=6) # the zoom level for the data to be displayed
# Add the census tracts gdf layer
folium.GeoJson(ca_counties_gdf,
style_function = lambda x: {
'weight':2,
'color':"white",
'opacity':1,
'fillColor':"red",
'fillOpacity':0.6
},
tooltip=folium.GeoJsonTooltip(
fields=['NAME','POP2012','POP12_SQMI','MED_AGE' ],
aliases=['County', 'Population', 'Population Density (mi2)', 'Median Age'],
labels=True,
localize=True
),
).add_to(map1)
map1
*Click here for answers*

<!---
# Define the basemap
map1 = folium.Map(location=[37.8721, -122.2578], # lat, lon around which to center the map
                  tiles='CartoDB Positron',
                  width=1000, # the width & height of the output map
                  height=600, # in pixels
                  zoom_start=6) # the zoom level for the data to be displayed

# Add the census tracts gdf layer
folium.GeoJson(ca_counties_gdf,
               style_function = lambda x: {
                   'weight':2,
                   'color':"white",
                   'opacity':1,
                   'fillColor':"red",
                   'fillOpacity':0.6
               },
               tooltip=folium.GeoJsonTooltip(
                   fields=['FID_','POP2012','POP12_SQMI','MED_AGE'],
                   aliases=['County ID', 'Population', 'Population Density (mi2)', 'Median Age'],
                   labels=True,
                   localize=True
               ),
              ).add_to(map1)
map1
--->

12.4 Data Mapping

Above, we set the style for all of the census tracts to the same fill and outline colors and opacity values. Let's take a look at how we would use the data values to set the color values for the polygons. This is called a choropleth map or, more generally, a thematic map.

The `folium.Choropleth` function can be used for this.

# Uncomment to view help docs
## folium.Choropleth?
With `folium.Choropleth`, we will use some of the same style parameters that we used with `folium.GeoJson`. We will also use some new parameters, as shown below.

First, let's take a look at the data we will map to refresh our knowledge.

print(ca_counties_gdf.columns)
ca_counties_gdf.head(2)
Now let's create a choropleth map of total population, which is in the `POP2012` column.

ca_counties_gdf.head()
# Define the basemap
map2 = folium.Map(location=[37.8721, -122.2578], # lat, lon around which to center the map
tiles='CartoDB Positron',
width=1000, # the width & height of the output map
height=600, # in pixels
zoom_start=6) # the zoom level for the data to be displayed
# Add the Choropleth layer
folium.Choropleth(geo_data=ca_counties_gdf.set_index('NAME'), # The object with the geospatial data
data=ca_counties_gdf, # The object with the attribute data (can be same)
columns=['NAME','POP2012'], # the ID and data columns in the data objects
key_on="feature.id", # the ID in the geo_data object (don't change)
fill_color="Reds", # The color palette (or color map) - see help
fill_opacity=0.65,
line_color="grey",
legend=True,
legend_name="Population",
).add_to(map2)
# Display the map
map2 | _____no_output_____ | MIT | 12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb | reeshav-netizen/Geospatial-Fundamentals-in-Python |
Choropleth Mapping with Folium - discussion

Let's discuss the following lines from the code above in more detail.

# Add the Choropleth layer
folium.Choropleth(geo_data=ca_counties_gdf.set_index('NAME'),
                  data=ca_counties_gdf,
                  columns=['NAME','POP2012'],
                  key_on="feature.id",
                  fill_color="Reds",
                  ...)

`geo_data` and `data`: we need to identify the objects that contain the geometries and the attribute data, because they could be different objects. In our example they are the same object.

`ca_counties_gdf.set_index('NAME')`: we need to **set_index('NAME')** in order to identify the column in `geo_data` that will be used to `join` the geometries in `geo_data` to the data values in `data`.

`columns=['NAME','POP2012']`: we identify in `data` (1) the column that will join these `data` to `geo_data`, and (2) the column with the values that will determine the color.

`fill_color="Reds"`: here we identify the name of the color palette that we will use to style the polygons. These are the same as the `matplotlib` colormaps.

Question

Recall our discussion about best practices for choropleth maps. Is population count an appropriate variable to plot as a choropleth?

# Write your thoughts here
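For reference (not part of the original notebook): raw counts make large-area polygons dominate a choropleth, so a normalized value such as density is usually preferred. The dataset here already carries `POP12_SQMI`, but a minimal sketch of deriving a density column yourself, assuming EPSG:3310 (an equal-area CRS for California) for the area calculation, would be:

# Sketch: compute population per km2 from the geometries
# (EPSG:3310, California Albers, is equal-area, so .area is in square meters)
counties_m = ca_counties_gdf.to_crs(epsg=3310)
ca_counties_gdf['pop_per_km2'] = ca_counties_gdf['POP2012'] / (counties_m.geometry.area / 1e6)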
Exercise

Copy and paste the code from above into the cell below to create a choropleth map of population density (`POP12_SQMI`).

Feel free to experiment with any of the `folium.Choropleth` style parameters, especially the `fill_color`, which needs to be one of the color brewer palettes listed below:

    fill_color: string, default 'blue'
        Area fill color. Can pass a hex code, color name, or if you are
        binding data, one of the following color brewer palettes:
        'BuGn', 'BuPu', 'GnBu', 'OrRd', 'PuBu', 'PuBuGn', 'PuRd', 'RdPu',
        'YlGn', 'YlGnBu', 'YlOrBr', and 'YlOrRd'.

# Your code here
# Define the basemap
map2 = folium.Map(location=[37.7749, -122.4194], # lat, lon around which to center the map
tiles='Stamen Toner',
width=1000, # the width & height of the output map
height=600, # in pixels
zoom_start=10) # the zoom level for the data to be displayed
# Add the Choropleth layer
folium.Choropleth(geo_data=ca_counties_gdf.set_index('NAME'), # The object with the geospatial data
data=ca_counties_gdf, # The object with the attribute data (can be same)
columns=['NAME','POP12_SQMI'], # the ID and data columns in the data objects
key_on="feature.id", # the ID in the geo_data object (don't change)
fill_color="RdPu", # The color palette (or color map) - see help
fill_opacity=0.8).add_to(map2)
map2
*Click here for answers*

<!---
# SOLUTION
# Get our map center
ctrX = (tracts_gdf.total_bounds[0] + tracts_gdf.total_bounds[2])/2
ctrY = (tracts_gdf.total_bounds[1] + tracts_gdf.total_bounds[3])/2

# Create our base map
map2 = folium.Map(location=[ctrY, ctrX],
                  tiles='CartoDB Positron',
                  width=800, height=600,
                  zoom_start=10)

# Add the Choropleth layer
folium.Choropleth(geo_data=tracts_gdf.set_index('GEOID'),
                  data=tracts_gdf,
                  columns=['GEOID','pop_dens_km2'],
                  key_on="feature.id",
                  fill_color="PuBu",
                  fill_opacity=0.65,
                  line_color="grey",
                  legend=True,
                  legend_name="Population Density per km2",
                 ).add_to(map2)

# Display
map2
--->

Choropleth Maps with Tooltips

You can add a `tooltip` to a `folium.Choropleth` map, but the process is not straightforward. The `folium.Choropleth` function does not have a tooltip argument the way `folium.GeoJson` does.

The workaround is to add the layer both as a `folium.Choropleth` layer and as a `folium.GeoJson` layer, and bind the tooltip to the GeoJson layer.

Let's check it out below.

# Define the basemap
map3 = folium.Map(location=[37.8721, -122.2578], # lat, lon around which to center the map
tiles='CartoDB Positron',
width=1000, # the width & height of the output map
height=600, # in pixels
zoom_start=6) # the zoom level for the data to be displayed
# Add the Choropleth layer
folium.Choropleth(geo_data=ca_counties_gdf.set_index('NAME'), # The object with the geospatial data
data=ca_counties_gdf, # The object with the attribute data (can be same)
columns=['NAME','POP2012'], # the ID and data columns in the data objects
key_on="feature.id", # the ID in the geo_data object (don't change)
fill_color="Reds", # The color palette (or color map) - see help
fill_opacity=0.65,
line_color="grey",
legend=True,
legend_name="Population",
).add_to(map3)
# ADD the same geodataframe to the map to display a tooltip
layer2 = folium.GeoJson(ca_counties_gdf,
style_function=lambda x: {'color':'transparent','fillColor':'transparent'},
tooltip=folium.GeoJsonTooltip(
fields=['NAME','POP2012'],
aliases=['County', 'Population'],
labels=True,
localize=True
),
highlight_function=lambda x: {'weight':3,'color':'white'}
).add_to(map3)
map3 # show map
Question

Do you notice anything different about the `style_function` for layer2 above?

Exercise

Redo the above choropleth map code to map population density. Add both population and population density to the tooltip. Don't forget to update the legend name.

# Your code here
12.5 Overlays

We can overlay other geospatial data on our folium maps.

Let's say we want to focus the previous choropleth map with tooltips (`map3`) on the City of Berkeley. We can fetch the border of the city from our census Places dataset. These data can be downloaded from the Census website. We use the cartographic boundary files, not the TIGER/Line files, as these look better on a map (clipped to the shoreline). Specifically, we will fetch the city boundaries from the following census cartographic boundary file:

- https://www2.census.gov/geo/tiger/GENZ2018/shp/cb_2018_06_place_500k.zip

Then we can overlay the border of the city on the map and set the initial zoom to the center of the Berkeley boundary.

Let's try that. First we need to read in the census places data and create a subset geodataframe for our city of interest, here Berkeley.

places = gpd.read_file("zip://notebook_data/census/Places/cb_2018_06_place_500k.zip")
places.head(2)
berkeley = places[places.NAME=='Berkeley'].copy()
berkeley.head(2)
Plot the Berkeley geodataframe to make sure it looks ok.

berkeley.plot()
# Create a new map centered on Berkeley
berkeley_map = folium.Map(location=[berkeley.centroid.y.mean(),
berkeley.centroid.x.mean()],
tiles='CartoDB Positron',
width=800,height=600,
zoom_start=13)
# Add the census tract polygons as a choropleth map
layer1=folium.Choropleth(geo_data=ca_counties_gdf.set_index('NAME'),
data=ca_counties_gdf,
columns=['NAME','POP2012'],
fill_color="Reds",
fill_opacity=0.65,
line_color="grey", #"white",
line_weight=1,
line_opacity=1,
key_on="feature.id",
legend=True,
legend_name="Population",
highlight=True
).add_to(berkeley_map)
# Add the berkeley boundary - note the fill color
layer2 = folium.GeoJson(data=berkeley,
name='Berkeley',smooth_factor=2,
style_function=lambda x: {'color':'black',
'opacity':1,
'fillColor':
'transparent',
'weight':3},
).add_to(berkeley_map)
# Add the tooltip for the census tracts as its own layer
layer3 = folium.GeoJson(ca_counties_gdf,
style_function=lambda x: {'color':'transparent','fillColor':'transparent'},
tooltip=folium.features.GeoJsonTooltip(
fields=['NAME','POP2012'],
aliases=['County', 'Population'],
labels=True,
localize=True
),
highlight_function=lambda x: {'weight':3,'color':'white'}
).add_to(berkeley_map)
berkeley_map # show map
Questions

Any questions about the above map?

Does the code for the Berkeley map above differ from our previous choropleth map code?

Does the order of layer2 & layer3 matter (can they be switched)?

Exercise

Redo the above map with population density. Create and display the Oakland city boundary on the map instead of Berkeley, and center the map on Oakland.

# Your code here
*Click here for solution*

<!---
# SOLUTION
oakland = places[places.NAME=='Oakland'].copy()
oakland.plot()

# SOLUTION
oakland_map = folium.Map(location=[oakland.centroid.y.mean(),
                                   oakland.centroid.x.mean()],
                         tiles='CartoDB Positron',
                         width=800, height=600,
                         zoom_start=12)

# Add the census tract polygons as a choropleth map
layer1 = folium.Choropleth(geo_data=ca_counties_gdf.set_index('NAME'),
                           data=ca_counties_gdf,
                           columns=['NAME','POP2012'],
                           fill_color="Reds",
                           fill_opacity=0.65,
                           line_color="grey", #"white",
                           line_weight=1,
                           line_opacity=1,
                           key_on="feature.id",
                           legend=True,
                           legend_name="Population",
                           highlight=True
                          ).add_to(oakland_map)

# Add the oakland boundary
layer2 = folium.GeoJson(data=oakland,
                        name='Oakland', smooth_factor=2,
                        style_function=lambda x: {'color':'black','opacity':1,'fillColor':'transparent','weight':3},
                       ).add_to(oakland_map)

# Add the tooltip
layer3 = folium.GeoJson(ca_counties_gdf,
                        style_function=lambda x: {'color':'transparent','fillColor':'transparent'},
                        tooltip=folium.features.GeoJsonTooltip(
                            fields=['NAME','POP2012'],
                            aliases=['County', 'Population'],
                            labels=True,
                            localize=True
                        ),
                        highlight_function=lambda x: {'weight':3,'color':'white'}
                       ).add_to(oakland_map)

oakland_map # show map
--->

12.6 Mapping Points and Lines

We can also add points and lines to a folium map.

Let's overlay BART stations as points and BART lines as lines on the interactive map. For the Bay Area, these data are available from the [Metropolitan Transportation Commission (MTC) Open Data portal](http://opendata.mtc.ca.gov/datasets).

We're going to try pulling in BART station data that we downloaded from the website and subsetted from the passenger rail stations. You can learn more about the dataset here: http://opendata.mtc.ca.gov/datasets/passenger-rail-stations-2019

As usual, let's pull in the data and inspect the first couple of rows.

# Load light rail stop data
railstops = gpd.read_file("zip://notebook_data/transportation/Passenger_Rail_Stations_2019.zip")
railstops.tail()
# Subset to keep just bart stations
bart_stations = railstops[railstops['agencyname']=='BART'].sort_values(by="station_na")
bart_stations.head()
# Repeat for the rail lines
rail_lines = gpd.read_file("zip://notebook_data/transportation/Passenger_Railways_2019.zip")
rail_lines.head()
rail_lines.operator.value_counts()
# subset by operator to get the bart lines
bart_lines = rail_lines[rail_lines['operator']=='BART']
# Check the CRS of the geodataframes
print(bart_stations.crs)
print(bart_lines.crs)
# Quick plot
bart_stations.plot()
bart_lines.plot()
Now that we have fetched and checked the BART data, let's do a quick folium map with it.

We will use `folium.GeoJson` to add these data to the map, just as we used it previously for the census tract polygons.

# Bart Map
map4 = folium.Map(location=[bart_stations.centroid.y.mean(), bart_stations.centroid.x.mean()],
tiles='CartoDB Positron',
width=800,height=600,
zoom_start=10)
folium.GeoJson(bart_lines).add_to(map4)
folium.GeoJson(bart_stations).add_to(map4)
map4 # show map
We can also add tooltips, just as we did previously.

# Bart Map
map4 = folium.Map(location=[bart_stations.centroid.y.mean(), bart_stations.centroid.x.mean()],
tiles='CartoDB Positron',
#width=800,height=600,
zoom_start=10)
# Add Bart lines
folium.GeoJson(bart_lines,
tooltip=folium.GeoJsonTooltip(
fields=['operator' ],
aliases=['Line operator'],
labels=True,
localize=True
),
).add_to(map4)
# Add Bart stations
folium.GeoJson(bart_stations,
tooltip=folium.GeoJsonTooltip(fields=['ts_locatio'],
aliases=['Stop Name'],
labels=True,
localize=True
),
).add_to(map4)
map4 # show map
That's pretty cool, but don't you just want to click on those marker points to get a `popup` rather than hovering over for a `tooltip`?

Mapping Points

So far we have used `folium.GeoJson` to map our BART points. By default this uses the push-pin marker symbology made popular by Google Maps. Under the hood, `folium.GeoJson` uses the default object type `folium.Marker` when the input data are points.

This is helpful to know because `folium.Marker` has a few options that allow further customization of our points.

# Uncomment to view help docs
folium.Marker?
Let's explicitly add the BART stations as points so we can change the `tooltips` to `popups`.

# Bart Map
map4 = folium.Map(location=[bart_stations.centroid.y.mean(), bart_stations.centroid.x.mean()],
tiles='CartoDB Positron',
#width=800,height=800,
zoom_start=10)
# Add Bart lines
folium.GeoJson(bart_lines,
tooltip=folium.GeoJsonTooltip(
fields=['operator' ],
aliases=['Line operator'],
labels=True,
localize=True
),
).add_to(map4)
# Add Bart stations
bart_stations.apply(lambda row:
folium.Marker(
location=[row['geometry'].y, row['geometry'].x],
popup=row['ts_locatio'],
).add_to(map4), axis=1)
map4 # show map
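As a side note (not in the original notebook), the `apply(..., axis=1)` call above just builds one `folium.Marker` per row; an equivalent plain loop, which some readers find easier to scan, is:

# Sketch: the same markers, built with an explicit loop instead of apply
for _, row in bart_stations.iterrows():
    folium.Marker(
        location=[row['geometry'].y, row['geometry'].x],
        popup=row['ts_locatio'],
    ).add_to(map4)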
That `folium.Marker` code is a bit more complex than `folium.GeoJson` and may not be worth it unless you really want that popup behavior.

But let's see what else we can do with a `folium.Marker` by viewing the next map.

# Bart Map
map4 = folium.Map(location=[bart_stations.centroid.y.mean(), bart_stations.centroid.x.mean()],
tiles='CartoDB Positron',
#width=800,height=600,
zoom_start=10)
# Add BART lines
folium.GeoJson(bart_lines,
tooltip=folium.GeoJsonTooltip(
fields=['operator' ],
aliases=['Line operator'],
labels=True,
localize=True
),
).add_to(map4)
# Add BART Stations
icon_url = "https://gomentumstation.net/wp-content/uploads/2018/08/Bay-area-rapid-transit-1000.png"
bart_stations.apply(lambda row:
folium.Marker(
location=[row['geometry'].y,row['geometry'].x],
popup=row['ts_locatio'],
icon=folium.features.CustomIcon(icon_url,icon_size=(20, 20)),
).add_to(map4), axis=1)
map4 # show map
Exercise

Copy and paste the code for the previous cell into the next cell and:

1. change the BART icon to "https://ya-webdesign.com/transparent450_/train-emoji-png-14.png"
2. change the popup back to a tooltip.

# Your code here
*Click here for solution*

<!---
# Bart Map
map4 = folium.Map(location=[bart_stations.centroid.y.mean(), bart_stations.centroid.x.mean()],
                  tiles='CartoDB Positron',
                  width=800, height=600,
                  zoom_start=10)

# Add BART lines
folium.GeoJson(bart_lines,
               tooltip=folium.GeoJsonTooltip(
                   fields=['operator'],
                   aliases=['Line operator'],
                   labels=True,
                   localize=True
               ),
              ).add_to(map4)

# Add BART Stations
icon_url = "https://ya-webdesign.com/transparent450_/train-emoji-png-14.png"
bart_stations.apply(lambda row:
                    folium.Marker(
                        location=[row['geometry'].y, row['geometry'].x],
                        tooltip=row['ts_locatio'],
                        icon=folium.features.CustomIcon(icon_url, icon_size=(20, 20)),
                    ).add_to(map4), axis=1)

map4 # show map
--->

folium.CircleMarkers

You may prefer to customize points as `CircleMarkers` instead of the icon or pushpin Marker style. This allows you to set the size and color of a marker, either manually or as a function of a data variable.

Let's look at some code for doing this.

# Define the basemap
map5 = folium.Map(location=[bart_stations.centroid.y.mean(), bart_stations.centroid.x.mean()], # lat, lon around which to center the map
tiles='CartoDB Positron',
#width=1000, # the width & height of the output map
#height=600, # in pixels
zoom_start=10) # the zoom level for the data to be displayed
# Add BART Lines
folium.GeoJson(bart_lines).add_to(map5)
# Add BART Stations
bart_stations.apply(lambda row:
folium.CircleMarker(
location=[row['geometry'].y, row['geometry'].x],
radius=10,
color='purple',
fill=True,
fill_color='purple',
popup=row['ts_locatio'],
).add_to(map5),
axis=1)
map5
folium.Circle

You can also set the size of your circles to a fixed radius, in meters, using `folium.Circle`. This is great for exploratory data analysis. For example, you can see what the census tract values are within 500 meters of a BART station.

# Uncomment to view
#?folium.Circle
# Define the basemap
map5 = folium.Map(location=[bart_stations.centroid.y.mean(), bart_stations.centroid.x.mean()], # lat, lon around which to center the map
tiles='CartoDB Positron',
#width=1000, # the width & height of the output map
#height=600, # in pixels
zoom_start=10) # the zoom level for the data to be displayed
# Add BART Lines
folium.GeoJson(bart_lines).add_to(map5)
# Add BART Stations
bart_stations.apply(lambda row:
folium.Circle(
location=[row['geometry'].y, row['geometry'].x],
radius=500,
color='purple',
fill=True,
fill_color='purple',
popup=row['ts_locatio'],
).add_to(map5),
axis=1)
map5
Question

What do you notice about the size of the circles as you zoom in/out when you compare folium.Circle and folium.CircleMarker?

Proportional Symbol Maps

One of the advantages of the `folium.CircleMarker` is that we can set the size of the marker to vary based on a data value.

To give this a try, let's add a fake column to the `bart_stations` gdf called `millions_served` and set it to a random integer between 1 and 9 (note that `np.random.randint(1, 10)` excludes the upper bound).

# add a column to the bart stations gdf
bart_stations['millions_served'] = np.random.randint(1,10, size=len(bart_stations))
bart_stations.head()
# Define the basemap
map5 = folium.Map(location=[bart_stations.centroid.y.mean(), bart_stations.centroid.x.mean()],
tiles='CartoDB Positron',
#width=1000, # the width & height of the output map
#height=600, # in pixels
zoom_start=10) # the zoom level for the data to be displayed
folium.GeoJson(bart_lines).add_to(map5)
# Add BART Stations as CircleMarkers
# Here, some knowlege of Python string formatting is useful
bart_stations.apply(lambda row:
folium.CircleMarker(
location=[row['geometry'].y, row['geometry'].x],
radius=row['millions_served'],
color='purple',
fill=True,
fill_color='purple',
tooltip = "Bart Station: %s<br>Millions served: %s" % (row['ts_locatio'], row['millions_served'])
).add_to(map5), axis=1)
map5
So if you hover over our BART stations, you see that we've formatted the tooltip nicely! Using some HTML and Python string formatting, we can make our `tooltip` easier to read. If you want to learn more about customizing these, you can [check this out to learn HTML basics](https://www.w3schools.com/html/html_basic.asp) and then [go here to learn about Python string formatting](https://python-reference.readthedocs.io/en/latest/docs/str/formatting.html). (A compact restatement of this tooltip pattern appears after the next map.)

12.7 Creating and Saving a folium Interactive Map

Now that you have seen most of the ways you can add a geodataframe to a folium map, let's create one big map that includes several of our geodataframes.

To control the display of the data layers, we will add a `folium.LayerControl`:

- A `folium.LayerControl` will allow you to toggle on/off a map's visible layers.
- In order to add a layer to the LayerControl, the layer must have a value set for its `name`.

Let's take a look.

# Create a new map centered on the census tract data
map6 = folium.Map(location=[bart_stations.centroid.y.mean(), bart_stations.centroid.x.mean()],
tiles='CartoDB Positron',
#width=800,height=600,
zoom_start=10)
# Add the counties polygons as a choropleth map
layer1=folium.Choropleth(geo_data=ca_counties_gdf.set_index('NAME'),
data=ca_counties_gdf,
columns=['NAME','POP2012'],
fill_color="Reds",
fill_opacity=0.65,
line_color="grey", #"white",
line_weight=1,
line_opacity=1,
key_on="feature.id",
legend=True,
legend_name="Population",
highlight=True,
name="Counties"
).add_to(map6)
# Add the tooltip for the counties as its own layer
# Don't display in the Layer control!
layer2 = folium.GeoJson(ca_counties_gdf,
style_function=lambda x: {'color':'transparent','fillColor':'transparent'},
tooltip=folium.features.GeoJsonTooltip(
fields=['NAME','POP2012'],
aliases=['Name', 'Population'],
labels=True,
localize=True
),
highlight_function=lambda x: {'weight':3,'color':'white'}
).add_to(layer1.geojson)
# Add Bart lines
folium.GeoJson(bart_lines,
name="Bart Lines",
tooltip=folium.GeoJsonTooltip(
fields=['operator' ],
aliases=['Line operator'],
labels=True,
localize=True
),
).add_to(map6)
# Add Bart stations
folium.GeoJson(bart_stations,
name="Bart stations",
tooltip=folium.GeoJsonTooltip(fields=['ts_locatio' ],
aliases=['Stop Name'],
labels=True,
localize=True
),
).add_to(map6)
# ADD LAYER CONTROL
folium.LayerControl(collapsed=False).add_to(map6)
map6 # show map
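As promised above, here is a compact restatement of the HTML tooltip pattern from the proportional-symbol map. This is a sketch, not part of the original notebook; the station name and value below are hypothetical.

# Sketch: an f-string builds the same HTML tooltip; <br> breaks the line, <b> bolds the label
station, served = 'Downtown Berkeley', 4  # hypothetical values
tooltip_html = f"<b>Bart Station:</b> {station}<br><b>Millions served:</b> {served}"
# tooltip_html could then be passed as the tooltip= argument of a folium.CircleMarker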
Questions

1. Take a look at the help docs `folium.LayerControl?`. What parameter would move the location of the LayerControl? What parameter would allow it to be closed by default?
2. Take a look at the way we added `layer2` above (this has the census tract tooltips). How has the code we use to add the layer to the map changed? Why do you think we made this change?

# Uncomment to view
#folium.LayerControl?
Saving to an HTML file

By saving our map to an HTML file, we can use it later as something to add to a website or email to a colleague.

You can save any of the maps you have in the notebook using this syntax:

> map_name.save("file_name.html")

Let's try that.

map6.save('outdata/bartmap.html')
Rasters for a single spikeglx session

- Load an extractor for visualization of the data
- Load the sorts as in notebook sglx_pipe-dev-sort-rasters--z_w12m7_20-20201104
- Load the mot_dict
- Plot rasters
- Export to npy for Brad

The SGL spikeextractor needs spikeextractors==0.9.3 and spikeinterface==0.12.0; it will break with other versions.

TODO: make sure my spikeglxrecordingextractor works with newer spikeextractors, or get rid of it and adapt theirs. (The reason I wrote my own is that theirs had an obscure way of reading the digital channels in the NIDAQs.)

%matplotlib inline
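# Version pinning sketch (an assumption about your environment; adapt to your pip/conda setup):
# !pip install spikeextractors==0.9.3 spikeinterface==0.12.0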
import os
import glob
import logging
import numpy as np
import pandas as pd
from scipy.io import wavfile
from scipy import signal
import pickle
from matplotlib import pyplot as plt
from importlib import reload
logger = logging.getLogger()
handler = logging.StreamHandler()
formatter = logging.Formatter(
'%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
from ceciestunepipe.file import filestructure as et
from ceciestunepipe.util import sglxutil as sglu
from ceciestunepipe.util.spikeextractors.extractors.spikeglxrecordingextractor import readSGLX as rsgl
from ceciestunepipe.util.spikeextractors.extractors.spikeglxrecordingextractor import spikeglxrecordingextractor as sglex
import spikeinterface as si
import spikeinterface.extractors as se
import spikeinterface.toolkit as st
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc
import spikeinterface.widgets as sw
logger.info('all modules loaded')
reload(et)
sess_par = {'bird': 'z_w12m7_20',
'sess': '20201104',
'probe': 'probe_0', # probe to sort ('probe_0', 'probe_1') (to lookup in the rig_par which port to extract)
'sort': 2}
exp_struct = et.get_exp_struct(sess_par['bird'], sess_par['sess'], sess_par['sort'])
ksort_folder = exp_struct['folders']['ksort']
raw_folder = exp_struct['folders']['raw']
sess_epochs = sglu.list_sgl_epochs(sess_par)
sess_epochs
### pick a session
reload(et)
reload(sglu)
epoch = sess_epochs[1] # g2 is the shortest
exp_struct = sglu.sgl_struct(sess_par, epoch)
sgl_folders, sgl_files = sglu.sgl_file_struct(exp_struct['folders']['raw'])
files_pd = pd.DataFrame(sgl_files)
get the recordings just in case

probe_id = int(sess_par['probe'].split('_')[-1])
i_run = 0
run_meta_files = {k: v[i_run] for k, v in sgl_files.items()}
run_recordings = {k: sglex.SpikeGLXRecordingExtractor(sglu.get_data_meta_path(v)[0]) for k, v in run_meta_files.items()}
load the sort and the motif dictionary

from ceciestunepipe.util.spike import kilosort as ks
from ceciestunepipe.util.sound import spectral as sp
from ceciestunepipe.util import plotutil as pu
plt.rcParams['lines.linewidth'] = 0.1
axes_pars = {'axes.labelpad': 5,
'axes.titlepad': 5,
'axes.titlesize': 'small',
'axes.grid': False,
'axes.xmargin': 0,
'axes.ymargin': 0}
plt.rcParams.update(axes_pars)
load sort

spike_pickle_path = os.path.join(exp_struct['folders']['processed'], 'spk_df.pkl')
clu_pickle_path = os.path.join(exp_struct['folders']['processed'], 'clu_df.pkl')
spk_df = pd.read_pickle(spike_pickle_path)
clu_df = pd.read_pickle(clu_pickle_path)
load motif dictionary

mot_dict_path = os.path.join(exp_struct['folders']['processed'], 'mot_dict.pkl')
logger.info('Loading mot_dict from {}'.format(mot_dict_path))
with open(mot_dict_path, 'rb') as handle:
mot_dict = pickle.load(handle)
mot_dict

2021-08-27 14:23:39,729 root INFO Loading mot_dict from /mnt/cube/earneodo/bci_zf/neuropix/birds/z_w12m7_20/Ephys/processed/20201104/2500r250a_3500_dir_g0/mot_dict.pkl
make a raster

## the start times, synced to the spike time base (ap_0; comes from the sglx_pipe-dev-sort-rasters notebook)
mot_samples = mot_dict['start_sample_ap_0']
mot_s_f = mot_dict['s_f']
ap_s_f = mot_dict['s_f_ap_0']
mot_samples
## get the actual raster for some clusters
def get_window_spikes(spk_df, clu_list, start_sample, end_sample):
onset = start_sample
offset = end_sample
spk_t = spk_df.loc[spk_df['times'].between(onset, offset, inclusive=False)]
spk_arr = np.zeros((clu_list.size, offset - onset))
for i, clu_id in enumerate(clu_list):
clu_spk_t = spk_t.loc[spk_t['clusters']==clu_id, 'times'].values
spk_arr[i, clu_spk_t - onset] = 1
return spk_arr
def get_rasters(spk_df, clu_list, start_samp_arr, span_samples):
# returns np.array([n_clu, n_sample, n_trial])
# get the window spikes for all of the clusters, for each of the start_samp_arr
spk_arr_list = [get_window_spikes(spk_df, clu_list, x, x+span_samples) for x in start_samp_arr]
    return np.stack(spk_arr_list, axis=-1)
collect all good RA units

t_pre = - 0.5
t_post = 1.5
t_pre_samp = int(t_pre * ap_s_f)
t_post_samp = int(t_post * ap_s_f)
clu_list = np.unique(clu_df.loc[(clu_df['KSLabel']=='good') & (clu_df['nucleus'].isin(['ra'])),
'cluster_id'])
rast_arr = get_rasters(spk_df, clu_list, mot_dict['start_sample_ap_0'] + t_pre_samp, t_post_samp - t_pre_samp)
def plot_as_raster(x, ax=None, t_0=None):
#x is [n_events, n_timestamps] array
n_y, n_t = x.shape
row = np.ones(n_t) + 1
t = np.arange(n_t)
col = np.arange(n_y)
frame = col[:, np.newaxis] + row[np.newaxis, :]
x[x==0] = np.nan
if ax is None:
fig, ax = plt.subplots()
raster = ax.scatter(t * x, frame * x, marker='.', facecolor='k', s=1, rasterized=False)
if t_0 is not None:
ax.axvline(x=t_0, color='red')
return ax
# NOTE: ap_start, pre_sec, post_sec, ap_sf, mic_arr and nidq_sf are assumed to be
# in scope from the companion sorting notebook referenced in the header.
spk_arr = get_window_spikes(spk_df, clu_list, int(ap_start + pre_sec*ap_sf), int(ap_start + post_sec*ap_sf))
fig, ax = plt.subplots(nrows=2, gridspec_kw={'height_ratios': [1, 10]}, figsize=(10, 22))
f, t, sxx = sp.ms_spectrogram(mic_arr.flatten(), nidq_sf)
#ax[0].plot(mic_arr.flatten())
ax[0].pcolormesh(t, f, np.log(sxx), cmap='inferno')
plot_as_raster(spk_arr, t_0=int(-pre_sec*ap_sf), ax=ax[1])
plt.tight_layout()
fig, ax_arr = plt.subplots(nrows=10, figsize=[10, 15], sharex=True)
for i_rast, clu_idx in enumerate(range(20, 30)):
#one_raster_ms = coarse(rast_arr[clu_idx].T, samples_in_ms)
#plt.imshow(one_raster_ms[::-1], aspect='auto', cmap='inferno')
    plot_as_raster(rast_arr[clu_idx].T, t_0=-t_pre_samp, ax=ax_arr[i_rast])
export to npy arrays

def export_spikes_array(spk_df, clu_list, start_samples, span_samples, file_path, bin_size=None):
# get the raster for the clu_list
# if necessary, bin it
# save it as numpy
rast_arr = get_rasters(spk_df, clu_list, start_samples, span_samples)
if bin_size:
logger.info('Getting binned spikes with {} sample bins'.format(bin_size))
rate_arr = pu.coarse(np.transpose(rast_arr, axes=[0, 2, 1]), n_coarse=bin_size)
# switch back axes to [clu, t, trial]
export_arr = np.transpose(rate_arr, axes=[0, 2, 1])
#export_arr = rate_arr
else:
export_arr = rast_arr
logger.info('saving spikes as {}'.format(file_path))
np.save(file_path, export_arr)
return export_arr
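The binning above relies on `pu.coarse` from `ceciestunepipe.util.plotutil`. For readers without that package, a rough stand-in (an assumption about its behavior: it appears to sum spikes within non-overlapping bins of `n_coarse` samples along the last axis) could look like:

# Sketch of a coarse-graining helper; the name coarse_sum and the exact
# semantics (bin-and-sum along the last axis) are assumptions, not the package's API
def coarse_sum(x, n_coarse):
    # x: array [..., n_samples]; drop the tail so n_samples divides evenly,
    # then sum within each bin of n_coarse samples along the last axis
    n_bins = x.shape[-1] // n_coarse
    trimmed = x[..., :n_bins * n_coarse]
    return trimmed.reshape(*x.shape[:-1], n_bins, n_coarse).sum(axis=-1)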
rast_arr = get_rasters(spk_df, clu_list, mot_dict['start_sample_ap_0'] + t_pre_samp, t_post_samp - t_pre_samp)
mot_len = mot_dict['template'].size
mot_len_s = mot_len / mot_s_f
t_pre = - 0.5
t_post = 0.5 + mot_len_s
bin_ms = 0
t_pre_samp = int(t_pre * ap_s_f)
t_post_samp = int(t_post * ap_s_f)
bin_samp = int(bin_ms * ap_s_f * 0.001)
spk_arr_list = []
for nucleus in ['hvc', 'ra']:
# get the cluster list
clu_list = np.unique(clu_df.loc[(clu_df['KSLabel']=='good') & (clu_df['nucleus'].isin([nucleus])),
'cluster_id'])
# make the file path
file_path = os.path.join(exp_struct['folders']['processed'],
'fr_arr-{}-{}ms.pkl'.format(nucleus, bin_ms))
logger.info('saving spikes as {}'.format(file_path))
# get the spikes to the file
spk_arr = export_spikes_array(spk_df,
clu_list,
mot_dict['start_sample_ap_0'] + t_pre_samp,
t_post_samp - t_pre_samp,
file_path,
bin_samp)
spk_arr_list.append(spk_arr)
spk_arr.shape
plot one spk_arr together with a motif

spk_arr = spk_arr_list[1]
plt.imshow(spk_arr[32, :, :].T, aspect='auto', cmap='inferno')
np.transpose(spk_arr, axes=[0, 2, 1]).shape
plt.plot(spk_arr[0].sum(axis=1))
np.transpose(rast_arr, axes=[0, 2, 1]).shape
spk_arr.shape
mot_len = mot_dict['template'].size
mot_len_s = mot_len / mot_s_f
t_pre = - 0.5
t_post = 0.5 + mot_len_s
bin_ms = 2
t_pre_samp = int(t_pre * ap_s_f)
t_post_samp = int(t_post * ap_s_f)
bin_samp = int(bin_ms * ap_s_f * 0.001)
mot_len_s
fr_arr = pu.coarse(np.transpose(rast_arr, axes=[0, 2, 1]), n_coarse=bin_samp)
fr_arr.shape
fig, ax_arr = plt.subplots(nrows=10, figsize=[10, 15], sharex=True)
for i_rast, clu_idx in enumerate(range(50, 60)):
#one_raster_ms = coarse(rast_arr[clu_idx].T, samples_in_ms)
#plt.imshow(one_raster_ms[::-1], aspect='auto', cmap='inferno')
    plot_as_raster(spk_arr[clu_idx].T, t_0=-t_pre_samp, ax=ax_arr[i_rast])
Lab 9

import pandas as pd
import altair as alt
import matplotlib.pyplot as plt
from vega_datasets import data
alt.themes.enable('opaque')
%matplotlib inline
In this lab we will use a _famous_ dataset, GapMinder. This is a reduced version that only considers countries, income, health, and population. Is there a natural way to group these countries?

gapminder = data.gapminder_health_income()
gapminder.head()
Exercise 1 (1 pt.)

Perform an exploratory analysis: at minimum a `describe` of the dataframe and a suitable visualization, for example a _scatter matrix_ of the numeric values.

gapminder.describe()
# scatter matrix of the numeric values
alt.Chart(gapminder).mark_circle().encode(
alt.X(alt.repeat("column"), type='quantitative'),
alt.Y(alt.repeat("row"), type='quantitative'),
).properties(
width=150,
height=150
).repeat(
row=['income', 'health', 'population'],
column=['population', 'health', 'income']
).interactive()
__Question:__ Is there any variable that, at a glance, gives you hints of how countries could be separated into groups?

__Answer:__ In the population variable three groups can be seen, in the health variable two groups, and in the income variable three distinct groups. Then, in the other plots: in population vs income there are three groups, one with many countries with low population and low income, another with two countries with a larger number of inhabitants, and finally countries with higher income. In health vs population three groups can be seen, differentiated by the number of inhabitants. Finally, in income vs health three groups are observed, differentiated by income per country.

Exercise 2 (1 pt.)

Apply a scaling to the data before applying our clustering algorithm. To do so, define the variable `X_raw`, a `numpy.array` with the values of the `gapminder` dataframe in the _income_, _health_, and _population_ columns. Then define the variable `X`, which must be the scaled version of `X_raw`.

from sklearn.preprocessing import StandardScaler
X_raw = pd.DataFrame({"income": gapminder["income"],"health": gapminder["health"],"population":gapminder["population"]}).to_numpy()
X = StandardScaler().fit_transform(X_raw)
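A quick sanity check (not part of the original lab) that the standardization worked; each column of `X` should now have mean close to 0 and standard deviation close to 1:

print(X.mean(axis=0))  # expect values very close to 0
print(X.std(axis=0))   # expect values very close to 1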
Exercise 3 (1 pt.)

Define a `KMeans` _estimator_ with `k=3` and `random_state=42`, then fit it with `X`, and finally add the obtained _labels_ to a new column of the `gapminder` dataframe called `cluster`. Finally, make the same plot as at the beginning, but colored by the obtained clusters.

from sklearn.cluster import KMeans
k = 3
kmeans = KMeans(n_clusters=k,random_state=42)
kmeans.fit(X)
clusters = kmeans.labels_
gapminder["cluster"] = clusters
alt.Chart(gapminder).mark_circle().encode(
alt.X(alt.repeat("column"), type='quantitative'),
alt.Y(alt.repeat("row"), type='quantitative'),
color='cluster:N'
).properties(
width=150,
height=150
).repeat(
row=['income', 'health', 'population'],
column=['population', 'health', 'income']
).interactive()
Exercise 4 (1 pt.)

__The elbow rule__

__How do we choose the best number of clusters?__

In this exercise we have used a number of clusters equal to 3. The model fit always improves as the number of clusters increases, but that does not mean the number of clusters is appropriate. In fact, if we have to fit $n$ points, clearly taking $n$ clusters would produce a perfect fit, but it would not reveal whether groups of data actually exist.

When the number of clusters is not known a priori, the [elbow rule](https://jarroba.com/seleccion-del-numero-optimo-clusters/) is used. It indicates that the most appropriate number is the one where the "slope changes" in the decrease of the sum of each point's distance to the clusters, as a function of the number of clusters.

Below we provide the code for clustering on the standardized data, read directly from a specially prepared file. In the line that declares `kmeans` inside the _for_ loop, you must define a K-Means estimator with `k` clusters and `random_state` 42. Remember to take the opportunity to fit the model in a single line.

elbow = pd.Series(name="inertia", dtype="float64").rename_axis(index="k")
for k in range(1, 10):
kmeans = KMeans(n_clusters=k,random_state=42).fit(X)
elbow.loc[k] = kmeans.inertia_ # Inertia: Sum of distances of samples to their closest cluster center
elbow = elbow.reset_index()
alt.Chart(elbow).mark_line(point=True).encode(
x="k:O",
y="inertia:Q"
).properties(
height=600,
width=800
)
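A complementary check (not part of the original lab) is the silhouette score, which, unlike the inertia, does not always improve as `k` grows, so its maximum is another guide for choosing the number of clusters. A sketch:

from sklearn.metrics import silhouette_score

# silhouette needs at least 2 clusters
for k in range(2, 10):
    labels = KMeans(n_clusters=k, random_state=42).fit_predict(X)
    print(k, silhouette_score(X, labels))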
Residual Networks

Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](https://arxiv.org/pdf/1512.03385.pdf), allow you to train much deeper networks than were previously practically feasible.

**In this assignment, you will:**
- Implement the basic building blocks of ResNets.
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification.

Updates

If you were working on the notebook before this update...
* The current notebook is version "2a".
* You can find your original work saved in the notebook with the previous version name ("v2").
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.

List of updates
* For testing on an image, replaced `preprocess_input(x)` with `x=x/255.0` to normalize the input image in the same way that the model's training data was normalized.
* Refers to "shallower" layers as those layers closer to the input, and "deeper" layers as those closer to the output (using "shallower" layers instead of "lower" or "earlier").
* Added/updated instructions.

This assignment will be done in Keras. Before jumping into the problem, let's run the cell below to load the required packages.

import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import tensorflow as tf  # needed for the tf.Session / tf.placeholder test cells below
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)

Using TensorFlow backend.
1 - The problem of very deep neural networks

Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.

* The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the shallower layers, closer to the input) to very complex features (at the deeper layers, closer to the output).
* However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent prohibitively slow.
* More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode" to take very large values).
* During training, you might therefore see the magnitude (or norm) of the gradient for the shallower layers decrease to zero very rapidly as training proceeds:

**Figure 1**: **Vanishing gradient.** The speed of learning decreases very rapidly for the shallower layers as the network trains.

You are now going to solve this problem by building a Residual Network!

2 - Building a Residual Network

In ResNets, a "shortcut" or a "skip connection" allows the model to skip layers:

**Figure 2**: A ResNet block showing a **skip-connection**. The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path.

By stacking these ResNet blocks on top of each other, you can form a very deep network. We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function accounts for ResNets' remarkable performance even more so than skip connections helping with vanishing gradients.)

Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are the same or different. You are going to implement both of them: the "identity block" and the "convolutional block."

2.1 - The identity block

The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:

**Figure 3**: **Identity block.** Skip connection "skips over" 2 layers. The upper path is the "shortcut path." The lower path is the "main path." In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement: you'll see that BatchNorm is just one line of code in Keras!

In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this:

**Figure 4**: **Identity block.** Skip connection "skips over" 3 layers.
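As a brief aside (a sketch, not part of the original assignment), the vanishing-gradient argument above can be written out. Backpropagation through $L$ layers multiplies the layer-to-layer Jacobians:

$$\frac{\partial \mathcal{L}}{\partial a^{[1]}} = \frac{\partial \mathcal{L}}{\partial a^{[L]}} \prod_{l=2}^{L} \frac{\partial a^{[l]}}{\partial a^{[l-1]}}$$

If each factor has norm roughly $c < 1$, the product scales like $c^{L-1}$, which vanishes as $L$ grows (and explodes when $c > 1$). With a skip connection the block computes $a^{[l+2]} = g(z^{[l+2]} + a^{[l]})$, so the Jacobian with respect to $a^{[l]}$ picks up an additive identity term, which keeps the gradient from collapsing.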
Here are the individual steps.

First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the seed for the random initialization.
- The first BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the seed for the random initialization.
- The second BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the seed for the random initialization.
- The third BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2c'`.
- Note that there is **no** ReLU activation function in this component.

Final step:
- The `X_shortcut` and the output from the 3rd layer `X` are added together.
- **Hint**: The syntax will look something like `Add()([var1,var2])`
- Then apply the ReLU activation function. This has no name and no hyperparameters.

**Exercise**: Implement the ResNet identity block. We have implemented the first component of the main path. Please read this carefully to make sure you understand what it is doing. You should implement the rest.
- To implement the Conv2D step: [Conv2D](https://keras.io/layers/convolutional/conv2d)
- To implement BatchNorm: [BatchNormalization](https://faroit.github.io/keras-docs/1.2.2/layers/normalization/) (axis: integer, the axis that should be normalized (typically the 'channels' axis))
- For the activation, use: `Activation('relu')(X)`
- To add the value passed forward by the shortcut: [Add](https://keras.io/layers/merge/add)

# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
"""
Implementation of the identity block as defined in Figure 3
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
    print("out = " + str(out[0][1][1][0]))

out = [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]
**Expected Output**:

**out**: [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]

2.2 - The convolutional block

The ResNet "convolutional block" is the second block type. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path:

**Figure 4**: **Convolutional block**

* The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.)
* For example, to reduce the activation dimension's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2.
* The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.

The details of the convolutional block are as follows.

First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the `glorot_uniform` seed.
- The first BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Second component of main path:
- The second CONV2D has $F_2$ filters of shape (f,f) and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the `glorot_uniform` seed.
- The second BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the `glorot_uniform` seed.
- The third BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.

Shortcut path:
- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '1'`. Use 0 as the `glorot_uniform` seed.
- The BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '1'`.

Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

**Exercise**: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.
- [Conv2D](https://keras.io/layers/convolutional/conv2d)
- [BatchNormalization](https://keras.io/layers/normalization/batchnormalization) (axis: integer, the axis that should be normalized (typically the features axis))
- For the activation, use: `Activation('relu')(X)`
- [Add](https://keras.io/layers/merge/add)

# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s=2):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
##### SHORTCUT PATH #### (≈2 lines)
X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0])) | out = [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]
| MIT | Course 4 - Convolutional Neural Networks/NoteBooks/Residual_Networks_v2a.ipynb | HarshitRuwali/Coursera-Deep-Learning-Specialization |
**Expected Output**: **out** [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603] 3 - Building your first ResNet model (50 layers)You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together. **Figure 5** : **ResNet-50 model** The details of this ResNet-50 model are:- Zero-padding pads the input with a pad of (3,3)- Stage 1: - The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1". - BatchNorm is applied to the 'channels' axis of the input. - MaxPooling uses a (3,3) window and a (2,2) stride.- Stage 2: - The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a". - The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".- Stage 3: - The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a". - The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".- Stage 4: - The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a". - The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".- Stage 5: - The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a". - The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".- The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".- The 'flatten' layer doesn't have any hyperparameters or name.- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be `'fc' + str(classes)`.**Exercise**: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above. You'll need to use this function: - Average pooling [see reference](https://keras.io/layers/pooling/averagepooling2d)Here are some other functions we used in the code below:- Conv2D: [See reference](https://keras.io/layers/convolutional/conv2d)- BatchNorm: [See reference](https://keras.io/layers/normalization/batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))- Zero padding: [See reference](https://keras.io/layers/convolutional/zeropadding2d)- Max pooling: [See reference](https://keras.io/layers/pooling/maxpooling2d)- Fully connected layer: [See reference](https://keras.io/layers/core/dense)- Addition: [See reference](https://keras.io/layers/merge/add) | # GRADED FUNCTION: ResNet50
def ResNet50(input_shape=(64, 64, 3), classes=6):
"""
Implementation of the popular ResNet50 with the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
### START CODE HERE ###
# Stage 3 (≈4 lines)
X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
X = identity_block(X, 3, [128, 128, 512], stage=3, block='b')
X = identity_block(X, 3, [128, 128, 512], stage=3, block='c')
X = identity_block(X, 3, [128, 128, 512], stage=3, block='d')
# Stage 4 (≈6 lines)
X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')
# Stage 5 (≈3 lines)
X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')
# AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
X = AveragePooling2D(pool_size=(2, 2), padding='same', name='avg_pool')(X)
### END CODE HERE ###
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer=glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs=X_input, outputs=X, name='ResNet50')
return model | _____no_output_____ | MIT | Course 4 - Convolutional Neural Networks/NoteBooks/Residual_Networks_v2a.ipynb | HarshitRuwali/Coursera-Deep-Learning-Specialization |
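As a quick sanity check on the architecture (an editorial sketch, not part of the graded notebook), we can trace the spatial size through the network with the "valid"-padding formula `(n - f) // s + 1`. Within each stage only the first block's 1x1 convolution of stride `s` changes the spatial size (the middle `f`x`f` convolutions use "same" padding and stride 1), and the numbers below match the `model.summary()` output shown later:

```python
def conv_out(n, f, s):
    # output size along one spatial axis for a "valid"-padded conv/pool
    return (n - f) // s + 1

n = 64 + 2 * 3             # 64x64 input, ZeroPadding2D((3, 3)) -> 70
n = conv_out(n, 7, 2)      # Stage 1 conv, 7x7 stride 2        -> 32
n = conv_out(n, 3, 2)      # Stage 1 max pool, 3x3 stride 2    -> 15
for s in [1, 2, 2, 2]:     # Stages 2-5 downsample via their first 1x1 conv
    n = conv_out(n, 1, s)  # 15 -> 15 -> 8 -> 4 -> 2
print(n)                   # 2, so AveragePooling2D((2, 2)) yields a 1x1x2048 tensor
```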
Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running `model.fit(...)` below. | model = ResNet50(input_shape = (64, 64, 3), classes = 6) | _____no_output_____ | MIT | Course 4 - Convolutional Neural Networks/NoteBooks/Residual_Networks_v2a.ipynb | HarshitRuwali/Coursera-Deep-Learning-Specialization |
As seen in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model. | model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) | _____no_output_____ | MIT | Course 4 - Convolutional Neural Networks/NoteBooks/Residual_Networks_v2a.ipynb | HarshitRuwali/Coursera-Deep-Learning-Specialization |
The model is now ready to be trained. The only thing you need is a dataset. Let's load the SIGNS Dataset. **Figure 6** : **SIGNS dataset** | X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape)) | number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
| MIT | Course 4 - Convolutional Neural Networks/NoteBooks/Residual_Networks_v2a.ipynb | HarshitRuwali/Coursera-Deep-Learning-Specialization |
Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch. | model.fit(X_train, Y_train, epochs = 2, batch_size = 32) | Epoch 1/2
1080/1080 [==============================] - 228s - loss: 2.8833 - acc: 0.2546
Epoch 2/2
1080/1080 [==============================] - 226s - loss: 1.9659 - acc: 0.3852
| MIT | Course 4 - Convolutional Neural Networks/NoteBooks/Residual_Networks_v2a.ipynb | HarshitRuwali/Coursera-Deep-Learning-Specialization |
**Expected Output**: **Epoch 1/2** loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours. **Epoch 2/2** loss: between 1 and 5, acc: between 0.2 and 0.5; you should see your loss decreasing and the accuracy increasing. Let's see how this model (trained for only two epochs) performs on the test set. | preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1])) | 120/120 [==============================] - 8s
Loss = 2.19054207802
Test Accuracy = 0.166666666667
| MIT | Course 4 - Convolutional Neural Networks/NoteBooks/Residual_Networks_v2a.ipynb | HarshitRuwali/Coursera-Deep-Learning-Specialization |
**Expected Output**: **Test Accuracy** between 0.16 and 0.25 For the purpose of this assignment, we've asked you to train the model for just two epochs. You can see that it achieves poor performance. Please go ahead and submit your assignment; to check correctness, the online grader will run your code only for a small number of epochs as well. After you have finished the official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get much better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU. Using a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model. | model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1])) | 120/120 [==============================] - 8s
Loss = 0.530178320408
Test Accuracy = 0.866666662693
| MIT | Course 4 - Convolutional Neural Networks/NoteBooks/Residual_Networks_v2a.ipynb | HarshitRuwali/Coursera-Deep-Learning-Specialization |
ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to achieve state-of-the-art accuracy. Congratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system! 4 - Test on your own image (Optional/Ungraded) If you wish, you can also take a picture of your own hand and see the output of the model. To do this: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go to your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right! | img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = x/255.0
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x)) | Input image shape: (1, 64, 64, 3)
class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] =
[[ 3.41876671e-06 2.77412561e-04 9.99522924e-01 1.98842812e-07
1.95619068e-04 4.11686671e-07]]
| MIT | Course 4 - Convolutional Neural Networks/NoteBooks/Residual_Networks_v2a.ipynb | HarshitRuwali/Coursera-Deep-Learning-Specialization |
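To reduce the class-probability vector to a single predicted label, a common follow-up is `argmax` (a small sketch, assuming `model` and `x` from the cell above are still in scope):

```python
import numpy as np

probs = model.predict(x)            # shape (1, 6): one probability per class
pred_class = int(np.argmax(probs))  # index of the most likely SIGNS class
print("predicted class:", pred_class)
```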
You can also print a summary of your model by running the following code. | model.summary() | ____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 64, 64, 3) 0
____________________________________________________________________________________________________
zero_padding2d_1 (ZeroPadding2D) (None, 70, 70, 3) 0 input_1[0][0]
____________________________________________________________________________________________________
conv1 (Conv2D) (None, 32, 32, 64) 9472 zero_padding2d_1[0][0]
____________________________________________________________________________________________________
bn_conv1 (BatchNormalization) (None, 32, 32, 64) 256 conv1[0][0]
____________________________________________________________________________________________________
activation_4 (Activation) (None, 32, 32, 64) 0 bn_conv1[0][0]
____________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 15, 15, 64) 0 activation_4[0][0]
____________________________________________________________________________________________________
res2a_branch2a (Conv2D) (None, 15, 15, 64) 4160 max_pooling2d_1[0][0]
____________________________________________________________________________________________________
bn2a_branch2a (BatchNormalizatio (None, 15, 15, 64) 256 res2a_branch2a[0][0]
____________________________________________________________________________________________________
activation_5 (Activation) (None, 15, 15, 64) 0 bn2a_branch2a[0][0]
____________________________________________________________________________________________________
res2a_branch2b (Conv2D) (None, 15, 15, 64) 36928 activation_5[0][0]
____________________________________________________________________________________________________
bn2a_branch2b (BatchNormalizatio (None, 15, 15, 64) 256 res2a_branch2b[0][0]
____________________________________________________________________________________________________
activation_6 (Activation) (None, 15, 15, 64) 0 bn2a_branch2b[0][0]
____________________________________________________________________________________________________
res2a_branch2c (Conv2D) (None, 15, 15, 256) 16640 activation_6[0][0]
____________________________________________________________________________________________________
res2a_branch1 (Conv2D) (None, 15, 15, 256) 16640 max_pooling2d_1[0][0]
____________________________________________________________________________________________________
bn2a_branch2c (BatchNormalizatio (None, 15, 15, 256) 1024 res2a_branch2c[0][0]
____________________________________________________________________________________________________
bn2a_branch1 (BatchNormalization (None, 15, 15, 256) 1024 res2a_branch1[0][0]
____________________________________________________________________________________________________
add_2 (Add) (None, 15, 15, 256) 0 bn2a_branch2c[0][0]
bn2a_branch1[0][0]
____________________________________________________________________________________________________
activation_7 (Activation) (None, 15, 15, 256) 0 add_2[0][0]
____________________________________________________________________________________________________
res2b_branch2a (Conv2D) (None, 15, 15, 64) 16448 activation_7[0][0]
____________________________________________________________________________________________________
bn2b_branch2a (BatchNormalizatio (None, 15, 15, 64) 256 res2b_branch2a[0][0]
____________________________________________________________________________________________________
activation_8 (Activation) (None, 15, 15, 64) 0 bn2b_branch2a[0][0]
____________________________________________________________________________________________________
res2b_branch2b (Conv2D) (None, 15, 15, 64) 36928 activation_8[0][0]
____________________________________________________________________________________________________
bn2b_branch2b (BatchNormalizatio (None, 15, 15, 64) 256 res2b_branch2b[0][0]
____________________________________________________________________________________________________
activation_9 (Activation) (None, 15, 15, 64) 0 bn2b_branch2b[0][0]
____________________________________________________________________________________________________
res2b_branch2c (Conv2D) (None, 15, 15, 256) 16640 activation_9[0][0]
____________________________________________________________________________________________________
bn2b_branch2c (BatchNormalizatio (None, 15, 15, 256) 1024 res2b_branch2c[0][0]
____________________________________________________________________________________________________
add_3 (Add) (None, 15, 15, 256) 0 bn2b_branch2c[0][0]
activation_7[0][0]
____________________________________________________________________________________________________
activation_10 (Activation) (None, 15, 15, 256) 0 add_3[0][0]
____________________________________________________________________________________________________
res2c_branch2a (Conv2D) (None, 15, 15, 64) 16448 activation_10[0][0]
____________________________________________________________________________________________________
bn2c_branch2a (BatchNormalizatio (None, 15, 15, 64) 256 res2c_branch2a[0][0]
____________________________________________________________________________________________________
activation_11 (Activation) (None, 15, 15, 64) 0 bn2c_branch2a[0][0]
____________________________________________________________________________________________________
res2c_branch2b (Conv2D) (None, 15, 15, 64) 36928 activation_11[0][0]
____________________________________________________________________________________________________
bn2c_branch2b (BatchNormalizatio (None, 15, 15, 64) 256 res2c_branch2b[0][0]
____________________________________________________________________________________________________
activation_12 (Activation) (None, 15, 15, 64) 0 bn2c_branch2b[0][0]
____________________________________________________________________________________________________
res2c_branch2c (Conv2D) (None, 15, 15, 256) 16640 activation_12[0][0]
____________________________________________________________________________________________________
bn2c_branch2c (BatchNormalizatio (None, 15, 15, 256) 1024 res2c_branch2c[0][0]
____________________________________________________________________________________________________
add_4 (Add) (None, 15, 15, 256) 0 bn2c_branch2c[0][0]
activation_10[0][0]
____________________________________________________________________________________________________
activation_13 (Activation) (None, 15, 15, 256) 0 add_4[0][0]
____________________________________________________________________________________________________
res3a_branch2a (Conv2D) (None, 8, 8, 128) 32896 activation_13[0][0]
____________________________________________________________________________________________________
bn3a_branch2a (BatchNormalizatio (None, 8, 8, 128) 512 res3a_branch2a[0][0]
____________________________________________________________________________________________________
activation_14 (Activation) (None, 8, 8, 128) 0 bn3a_branch2a[0][0]
____________________________________________________________________________________________________
res3a_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_14[0][0]
____________________________________________________________________________________________________
bn3a_branch2b (BatchNormalizatio (None, 8, 8, 128) 512 res3a_branch2b[0][0]
____________________________________________________________________________________________________
activation_15 (Activation) (None, 8, 8, 128) 0 bn3a_branch2b[0][0]
____________________________________________________________________________________________________
res3a_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_15[0][0]
____________________________________________________________________________________________________
res3a_branch1 (Conv2D) (None, 8, 8, 512) 131584 activation_13[0][0]
____________________________________________________________________________________________________
bn3a_branch2c (BatchNormalizatio (None, 8, 8, 512) 2048 res3a_branch2c[0][0]
____________________________________________________________________________________________________
bn3a_branch1 (BatchNormalization (None, 8, 8, 512) 2048 res3a_branch1[0][0]
____________________________________________________________________________________________________
add_5 (Add) (None, 8, 8, 512) 0 bn3a_branch2c[0][0]
bn3a_branch1[0][0]
____________________________________________________________________________________________________
activation_16 (Activation) (None, 8, 8, 512) 0 add_5[0][0]
____________________________________________________________________________________________________
res3b_branch2a (Conv2D) (None, 8, 8, 128) 65664 activation_16[0][0]
____________________________________________________________________________________________________
bn3b_branch2a (BatchNormalizatio (None, 8, 8, 128) 512 res3b_branch2a[0][0]
____________________________________________________________________________________________________
activation_17 (Activation) (None, 8, 8, 128) 0 bn3b_branch2a[0][0]
____________________________________________________________________________________________________
res3b_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_17[0][0]
____________________________________________________________________________________________________
bn3b_branch2b (BatchNormalizatio (None, 8, 8, 128) 512 res3b_branch2b[0][0]
____________________________________________________________________________________________________
activation_18 (Activation) (None, 8, 8, 128) 0 bn3b_branch2b[0][0]
____________________________________________________________________________________________________
res3b_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_18[0][0]
____________________________________________________________________________________________________
bn3b_branch2c (BatchNormalizatio (None, 8, 8, 512) 2048 res3b_branch2c[0][0]
____________________________________________________________________________________________________
add_6 (Add) (None, 8, 8, 512) 0 bn3b_branch2c[0][0]
activation_16[0][0]
____________________________________________________________________________________________________
activation_19 (Activation) (None, 8, 8, 512) 0 add_6[0][0]
____________________________________________________________________________________________________
res3c_branch2a (Conv2D) (None, 8, 8, 128) 65664 activation_19[0][0]
____________________________________________________________________________________________________
bn3c_branch2a (BatchNormalizatio (None, 8, 8, 128) 512 res3c_branch2a[0][0]
____________________________________________________________________________________________________
activation_20 (Activation) (None, 8, 8, 128) 0 bn3c_branch2a[0][0]
____________________________________________________________________________________________________
res3c_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_20[0][0]
____________________________________________________________________________________________________
bn3c_branch2b (BatchNormalizatio (None, 8, 8, 128) 512 res3c_branch2b[0][0]
____________________________________________________________________________________________________
activation_21 (Activation) (None, 8, 8, 128) 0 bn3c_branch2b[0][0]
____________________________________________________________________________________________________
res3c_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_21[0][0]
____________________________________________________________________________________________________
bn3c_branch2c (BatchNormalizatio (None, 8, 8, 512) 2048 res3c_branch2c[0][0]
____________________________________________________________________________________________________
add_7 (Add) (None, 8, 8, 512) 0 bn3c_branch2c[0][0]
activation_19[0][0]
____________________________________________________________________________________________________
activation_22 (Activation) (None, 8, 8, 512) 0 add_7[0][0]
____________________________________________________________________________________________________
res3d_branch2a (Conv2D) (None, 8, 8, 128) 65664 activation_22[0][0]
____________________________________________________________________________________________________
bn3d_branch2a (BatchNormalizatio (None, 8, 8, 128) 512 res3d_branch2a[0][0]
____________________________________________________________________________________________________
activation_23 (Activation) (None, 8, 8, 128) 0 bn3d_branch2a[0][0]
____________________________________________________________________________________________________
res3d_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_23[0][0]
____________________________________________________________________________________________________
bn3d_branch2b (BatchNormalizatio (None, 8, 8, 128) 512 res3d_branch2b[0][0]
____________________________________________________________________________________________________
activation_24 (Activation) (None, 8, 8, 128) 0 bn3d_branch2b[0][0]
____________________________________________________________________________________________________
res3d_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_24[0][0]
____________________________________________________________________________________________________
bn3d_branch2c (BatchNormalizatio (None, 8, 8, 512) 2048 res3d_branch2c[0][0]
____________________________________________________________________________________________________
add_8 (Add) (None, 8, 8, 512) 0 bn3d_branch2c[0][0]
activation_22[0][0]
____________________________________________________________________________________________________
activation_25 (Activation) (None, 8, 8, 512) 0 add_8[0][0]
____________________________________________________________________________________________________
res4a_branch2a (Conv2D) (None, 4, 4, 256) 131328 activation_25[0][0]
____________________________________________________________________________________________________
bn4a_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4a_branch2a[0][0]
____________________________________________________________________________________________________
activation_26 (Activation) (None, 4, 4, 256) 0 bn4a_branch2a[0][0]
____________________________________________________________________________________________________
res4a_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_26[0][0]
____________________________________________________________________________________________________
bn4a_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4a_branch2b[0][0]
____________________________________________________________________________________________________
activation_27 (Activation) (None, 4, 4, 256) 0 bn4a_branch2b[0][0]
____________________________________________________________________________________________________
res4a_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_27[0][0]
____________________________________________________________________________________________________
res4a_branch1 (Conv2D) (None, 4, 4, 1024) 525312 activation_25[0][0]
____________________________________________________________________________________________________
bn4a_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4a_branch2c[0][0]
____________________________________________________________________________________________________
bn4a_branch1 (BatchNormalization (None, 4, 4, 1024) 4096 res4a_branch1[0][0]
____________________________________________________________________________________________________
add_9 (Add) (None, 4, 4, 1024) 0 bn4a_branch2c[0][0]
bn4a_branch1[0][0]
____________________________________________________________________________________________________
activation_28 (Activation) (None, 4, 4, 1024) 0 add_9[0][0]
____________________________________________________________________________________________________
res4b_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_28[0][0]
____________________________________________________________________________________________________
bn4b_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4b_branch2a[0][0]
____________________________________________________________________________________________________
activation_29 (Activation) (None, 4, 4, 256) 0 bn4b_branch2a[0][0]
____________________________________________________________________________________________________
res4b_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_29[0][0]
____________________________________________________________________________________________________
bn4b_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4b_branch2b[0][0]
____________________________________________________________________________________________________
activation_30 (Activation) (None, 4, 4, 256) 0 bn4b_branch2b[0][0]
____________________________________________________________________________________________________
res4b_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_30[0][0]
____________________________________________________________________________________________________
bn4b_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4b_branch2c[0][0]
____________________________________________________________________________________________________
add_10 (Add) (None, 4, 4, 1024) 0 bn4b_branch2c[0][0]
activation_28[0][0]
____________________________________________________________________________________________________
activation_31 (Activation) (None, 4, 4, 1024) 0 add_10[0][0]
____________________________________________________________________________________________________
res4c_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_31[0][0]
____________________________________________________________________________________________________
bn4c_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4c_branch2a[0][0]
____________________________________________________________________________________________________
activation_32 (Activation) (None, 4, 4, 256) 0 bn4c_branch2a[0][0]
____________________________________________________________________________________________________
res4c_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_32[0][0]
____________________________________________________________________________________________________
bn4c_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4c_branch2b[0][0]
____________________________________________________________________________________________________
activation_33 (Activation) (None, 4, 4, 256) 0 bn4c_branch2b[0][0]
____________________________________________________________________________________________________
res4c_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_33[0][0]
____________________________________________________________________________________________________
bn4c_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4c_branch2c[0][0]
____________________________________________________________________________________________________
add_11 (Add) (None, 4, 4, 1024) 0 bn4c_branch2c[0][0]
activation_31[0][0]
____________________________________________________________________________________________________
activation_34 (Activation) (None, 4, 4, 1024) 0 add_11[0][0]
____________________________________________________________________________________________________
res4d_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_34[0][0]
____________________________________________________________________________________________________
bn4d_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4d_branch2a[0][0]
____________________________________________________________________________________________________
activation_35 (Activation) (None, 4, 4, 256) 0 bn4d_branch2a[0][0]
____________________________________________________________________________________________________
res4d_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_35[0][0]
____________________________________________________________________________________________________
bn4d_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4d_branch2b[0][0]
____________________________________________________________________________________________________
activation_36 (Activation) (None, 4, 4, 256) 0 bn4d_branch2b[0][0]
____________________________________________________________________________________________________
res4d_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_36[0][0]
____________________________________________________________________________________________________
bn4d_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4d_branch2c[0][0]
____________________________________________________________________________________________________
add_12 (Add) (None, 4, 4, 1024) 0 bn4d_branch2c[0][0]
activation_34[0][0]
____________________________________________________________________________________________________
activation_37 (Activation) (None, 4, 4, 1024) 0 add_12[0][0]
____________________________________________________________________________________________________
res4e_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_37[0][0]
____________________________________________________________________________________________________
bn4e_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4e_branch2a[0][0]
____________________________________________________________________________________________________
activation_38 (Activation) (None, 4, 4, 256) 0 bn4e_branch2a[0][0]
____________________________________________________________________________________________________
res4e_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_38[0][0]
____________________________________________________________________________________________________
bn4e_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4e_branch2b[0][0]
____________________________________________________________________________________________________
activation_39 (Activation) (None, 4, 4, 256) 0 bn4e_branch2b[0][0]
____________________________________________________________________________________________________
res4e_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_39[0][0]
____________________________________________________________________________________________________
bn4e_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4e_branch2c[0][0]
____________________________________________________________________________________________________
add_13 (Add) (None, 4, 4, 1024) 0 bn4e_branch2c[0][0]
activation_37[0][0]
____________________________________________________________________________________________________
activation_40 (Activation) (None, 4, 4, 1024) 0 add_13[0][0]
____________________________________________________________________________________________________
res4f_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_40[0][0]
____________________________________________________________________________________________________
bn4f_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4f_branch2a[0][0]
____________________________________________________________________________________________________
activation_41 (Activation) (None, 4, 4, 256) 0 bn4f_branch2a[0][0]
____________________________________________________________________________________________________
res4f_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_41[0][0]
____________________________________________________________________________________________________
bn4f_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4f_branch2b[0][0]
____________________________________________________________________________________________________
activation_42 (Activation) (None, 4, 4, 256) 0 bn4f_branch2b[0][0]
____________________________________________________________________________________________________
res4f_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_42[0][0]
____________________________________________________________________________________________________
bn4f_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4f_branch2c[0][0]
____________________________________________________________________________________________________
add_14 (Add) (None, 4, 4, 1024) 0 bn4f_branch2c[0][0]
activation_40[0][0]
____________________________________________________________________________________________________
activation_43 (Activation) (None, 4, 4, 1024) 0 add_14[0][0]
____________________________________________________________________________________________________
res5a_branch2a (Conv2D) (None, 2, 2, 512) 524800 activation_43[0][0]
____________________________________________________________________________________________________
bn5a_branch2a (BatchNormalizatio (None, 2, 2, 512) 2048 res5a_branch2a[0][0]
____________________________________________________________________________________________________
activation_44 (Activation) (None, 2, 2, 512) 0 bn5a_branch2a[0][0]
____________________________________________________________________________________________________
res5a_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_44[0][0]
____________________________________________________________________________________________________
bn5a_branch2b (BatchNormalizatio (None, 2, 2, 512) 2048 res5a_branch2b[0][0]
____________________________________________________________________________________________________
activation_45 (Activation) (None, 2, 2, 512) 0 bn5a_branch2b[0][0]
____________________________________________________________________________________________________
res5a_branch2c (Conv2D) (None, 2, 2, 2048) 1050624 activation_45[0][0]
____________________________________________________________________________________________________
res5a_branch1 (Conv2D) (None, 2, 2, 2048) 2099200 activation_43[0][0]
____________________________________________________________________________________________________
bn5a_branch2c (BatchNormalizatio (None, 2, 2, 2048) 8192 res5a_branch2c[0][0]
____________________________________________________________________________________________________
bn5a_branch1 (BatchNormalization (None, 2, 2, 2048) 8192 res5a_branch1[0][0]
____________________________________________________________________________________________________
add_15 (Add) (None, 2, 2, 2048) 0 bn5a_branch2c[0][0]
bn5a_branch1[0][0]
____________________________________________________________________________________________________
activation_46 (Activation) (None, 2, 2, 2048) 0 add_15[0][0]
____________________________________________________________________________________________________
res5b_branch2a (Conv2D) (None, 2, 2, 512) 1049088 activation_46[0][0]
____________________________________________________________________________________________________
bn5b_branch2a (BatchNormalizatio (None, 2, 2, 512) 2048 res5b_branch2a[0][0]
____________________________________________________________________________________________________
activation_47 (Activation) (None, 2, 2, 512) 0 bn5b_branch2a[0][0]
____________________________________________________________________________________________________
res5b_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_47[0][0]
____________________________________________________________________________________________________
bn5b_branch2b (BatchNormalizatio (None, 2, 2, 512) 2048 res5b_branch2b[0][0]
____________________________________________________________________________________________________
activation_48 (Activation) (None, 2, 2, 512) 0 bn5b_branch2b[0][0]
____________________________________________________________________________________________________
res5b_branch2c (Conv2D) (None, 2, 2, 2048) 1050624 activation_48[0][0]
____________________________________________________________________________________________________
bn5b_branch2c (BatchNormalizatio (None, 2, 2, 2048) 8192 res5b_branch2c[0][0]
____________________________________________________________________________________________________
add_16 (Add) (None, 2, 2, 2048) 0 bn5b_branch2c[0][0]
activation_46[0][0]
____________________________________________________________________________________________________
activation_49 (Activation) (None, 2, 2, 2048) 0 add_16[0][0]
____________________________________________________________________________________________________
res5c_branch2a (Conv2D) (None, 2, 2, 512) 1049088 activation_49[0][0]
____________________________________________________________________________________________________
bn5c_branch2a (BatchNormalizatio (None, 2, 2, 512) 2048 res5c_branch2a[0][0]
____________________________________________________________________________________________________
activation_50 (Activation) (None, 2, 2, 512) 0 bn5c_branch2a[0][0]
____________________________________________________________________________________________________
res5c_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_50[0][0]
____________________________________________________________________________________________________
bn5c_branch2b (BatchNormalizatio (None, 2, 2, 512) 2048 res5c_branch2b[0][0]
____________________________________________________________________________________________________
activation_51 (Activation) (None, 2, 2, 512) 0 bn5c_branch2b[0][0]
____________________________________________________________________________________________________
res5c_branch2c (Conv2D) (None, 2, 2, 2048) 1050624 activation_51[0][0]
____________________________________________________________________________________________________
bn5c_branch2c (BatchNormalizatio (None, 2, 2, 2048) 8192 res5c_branch2c[0][0]
____________________________________________________________________________________________________
add_17 (Add) (None, 2, 2, 2048) 0 bn5c_branch2c[0][0]
activation_49[0][0]
____________________________________________________________________________________________________
activation_52 (Activation) (None, 2, 2, 2048) 0 add_17[0][0]
____________________________________________________________________________________________________
avg_pool (AveragePooling2D) (None, 1, 1, 2048) 0 activation_52[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten) (None, 2048) 0 avg_pool[0][0]
____________________________________________________________________________________________________
fc6 (Dense) (None, 6) 12294 flatten_1[0][0]
====================================================================================================
Total params: 23,600,006
Trainable params: 23,546,886
Non-trainable params: 53,120
____________________________________________________________________________________________________
| MIT | Course 4 - Convolutional Neural Networks/NoteBooks/Residual_Networks_v2a.ipynb | HarshitRuwali/Coursera-Deep-Learning-Specialization |
Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png". | plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg')) | _____no_output_____ | MIT | Course 4 - Convolutional Neural Networks/NoteBooks/Residual_Networks_v2a.ipynb | HarshitRuwali/Coursera-Deep-Learning-Specialization |
Inferring parameters of SDEs using an Euler-Maruyama scheme _This notebook is derived from a presentation prepared for the Theoretical Neuroscience Group, Institute of Systems Neuroscience at Aix-Marseille University._ | %pylab inline
import arviz as az
import pymc3 as pm
import scipy
import theano.tensor as tt
from pymc3.distributions.timeseries import EulerMaruyama
%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid') | _____no_output_____ | Apache-2.0 | docs/source/notebooks/Euler-Maruyama_and_SDEs.ipynb | satrio-hw/pymc3 |
Toy model 1 Here's a scalar linear SDE in symbolic form $ dX_t = \lambda X_t \, dt + \sigma^2 dW_t $, discretized with the Euler-Maruyama scheme $ X_{t+\Delta t} = X_t + \lambda X_t \Delta t + \sigma^2 \sqrt{\Delta t}\, \epsilon_t $, where $\epsilon_t \sim N(0, 1)$ | # parameters
λ = -0.78
σ2 = 5e-3
N = 200
dt = 1e-1
# time series
x = 0.1
x_t = []
# simulate
for i in range(N):
x += dt * λ * x + sqrt(dt) * σ2 * randn()
x_t.append(x)
x_t = array(x_t)
# z_t noisy observation
z_t = x_t + randn(x_t.size) * 5e-3
figure(figsize=(10, 3))
subplot(121)
plot(x_t[:30], 'k', label='$x(t)$', alpha=0.5), plot(z_t[:30], 'r', label='$z(t)$', alpha=0.5)
title('Transient'), legend()
subplot(122)
plot(x_t[30:], 'k', label='$x(t)$', alpha=0.5), plot(z_t[30:], 'r', label='$z(t)$', alpha=0.5)
title('All time');
tight_layout() | _____no_output_____ | Apache-2.0 | docs/source/notebooks/Euler-Maruyama_and_SDEs.ipynb | satrio-hw/pymc3 |
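The update inside the loop above is the generic Euler-Maruyama step. Factored out as a reusable helper (a minimal sketch, not part of the original notebook; it reuses `N`, `dt`, `λ` and `σ2` from the cell above):

```python
import numpy as np

def em_step(x, drift, diffusion, dt, rng=np.random):
    # x_{t+dt} = x_t + f(x_t) * dt + g(x_t) * sqrt(dt) * N(0, 1)
    return x + drift(x) * dt + diffusion(x) * np.sqrt(dt) * rng.randn()

# reproduces the simulation above for the linear SDE
x = 0.1
for _ in range(N):
    x = em_step(x, lambda x: λ * x, lambda x: σ2, dt)
```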
What is the inference we want to make? Since we've made a noisy observation of the generated time series, we need to estimate both $x(t)$ and $\lambda$. First, we rewrite our SDE as a function returning a tuple of the drift and diffusion coefficients | def lin_sde(x, lam):
return lam * x, σ2 | _____no_output_____ | Apache-2.0 | docs/source/notebooks/Euler-Maruyama_and_SDEs.ipynb | satrio-hw/pymc3 |
Next, we describe the probability model as a set of three stochastic variables, `lam`, `xh`, and `zh`: | with pm.Model() as model:
# uniform prior, but we know it must be negative
lam = pm.Flat('lam')
# "hidden states" following a linear SDE distribution
# parametrized by time step (det. variable) and lam (random variable)
xh = EulerMaruyama('xh', dt, lin_sde, (lam, ), shape=N, testval=x_t)
# predicted observation
zh = pm.Normal('zh', mu=xh, sigma=5e-3, observed=z_t) | _____no_output_____ | Apache-2.0 | docs/source/notebooks/Euler-Maruyama_and_SDEs.ipynb | satrio-hw/pymc3 |
Once the model is constructed, we perform inference, i.e. sample from the posterior distribution, in the following steps: | with model:
trace = pm.sample(2000, tune=1000) | Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [xh, lam]
| Apache-2.0 | docs/source/notebooks/Euler-Maruyama_and_SDEs.ipynb | satrio-hw/pymc3 |
Next, we plot some basic statistics on the samples from the posterior: | figure(figsize=(10, 3))
subplot(121)
plot(percentile(trace[xh], [2.5, 97.5], axis=0).T, 'k', label=r'$\hat{x}_{95\%}(t)$')
plot(x_t, 'r', label='$x(t)$')
legend()
subplot(122)
hist(trace[lam], 30, label=r'$\hat{\lambda}$', alpha=0.5)
axvline(λ, color='r', label=r'$\lambda$', alpha=0.5)
legend(); | _____no_output_____ | Apache-2.0 | docs/source/notebooks/Euler-Maruyama_and_SDEs.ipynb | satrio-hw/pymc3 |
A model can fit the data precisely and still be wrong; we need to use _posterior predictive checks_ to assess whether, under our fitted model, the data are likely. In other words, we - assume the model is correct - simulate new observations - check that the new observations fit with the original data | # generate trace from posterior
ppc_trace = pm.sample_posterior_predictive(trace, model=model)
# plot with data
figure(figsize=(10, 3))
plot(percentile(ppc_trace['zh'], [2.5, 97.5], axis=0).T, 'k', label=r'$z_{95\% PP}(t)$')
plot(z_t, 'r', label='$z(t)$')
legend() | _____no_output_____ | Apache-2.0 | docs/source/notebooks/Euler-Maruyama_and_SDEs.ipynb | satrio-hw/pymc3 |
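Beyond eyeballing the band, one simple numeric check is the empirical coverage of the 95% posterior predictive interval (a sketch, reusing `ppc_trace` and `z_t` from above; a well-calibrated model should give a value near 0.95):

```python
import numpy as np

lo, hi = np.percentile(ppc_trace['zh'], [2.5, 97.5], axis=0)
coverage = np.mean((z_t >= lo) & (z_t <= hi))
print('fraction of observations inside the 95% PP band:', coverage)
```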
Note that - inference also estimates the initial conditions - the observed data $z(t)$ lies fully within the 95% interval of the PPC - there are many other ways of evaluating fit Toy model 2 As the next model, let's use a 2D oscillator with deterministic drift, \begin{align}\dot{x} &= \tau (x - x^3/3 + y) \\\dot{y} &= \frac{1}{\tau} (a - x)\end{align} simulated with small additive noise, and with noisy observation $z(t) = m x + (1 - m) y + N(0, 0.1^2)$. | N, τ, a, m, σ2 = 200, 3.0, 1.05, 0.2, 1e-1
xs, ys = [0.0], [1.0]
for i in range(N):
x, y = xs[-1], ys[-1]
dx = τ * (x - x**3.0/3.0 + y)
dy = (1.0 / τ) * (a - x)
xs.append(x + dt * dx + sqrt(dt) * σ2 * randn())
ys.append(y + dt * dy + sqrt(dt) * σ2 * randn())
xs, ys = array(xs), array(ys)
zs = m * xs + (1 - m) * ys + randn(xs.size) * 0.1
figure(figsize=(10, 2))
plot(xs, label='$x(t)$')
plot(ys, label='$y(t)$')
plot(zs, label='$z(t)$')
legend() | _____no_output_____ | Apache-2.0 | docs/source/notebooks/Euler-Maruyama_and_SDEs.ipynb | satrio-hw/pymc3 |
Now, estimate the hidden states $x(t)$ and $y(t)$, as well as the parameters $\tau$, $a$ and $m$. As before, we rewrite our SDE as a function returning the drift & diffusion coefficients: | def osc_sde(xy, τ, a):
x, y = xy[:, 0], xy[:, 1]
dx = τ * (x - x**3.0/3.0 + y)
dy = (1.0 / τ) * (a - x)
dxy = tt.stack([dx, dy], axis=0).T
return dxy, σ2 | _____no_output_____ | Apache-2.0 | docs/source/notebooks/Euler-Maruyama_and_SDEs.ipynb | satrio-hw/pymc3 |
As before, the Euler-Maruyama discretization of the SDE is written as a prediction of the state at step $i+1$ based on the state at step $i$. We can now write our statistical model as before, with uninformative priors on $\tau$, $a$ and $m$: | xys = c_[xs, ys]
with pm.Model() as model:
τh = pm.Uniform('τh', lower=0.1, upper=5.0)
ah = pm.Uniform('ah', lower=0.5, upper=1.5)
mh = pm.Uniform('mh', lower=0.0, upper=1.0)
xyh = EulerMaruyama('xyh', dt, osc_sde, (τh, ah), shape=xys.shape, testval=xys)
zh = pm.Normal('zh', mu=mh * xyh[:, 0] + (1 - mh) * xyh[:, 1], sigma=0.1, observed=zs)
with model:
trace = pm.sample(2000, tune=1000) | Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [xyh, mh, ah, τh]
| Apache-2.0 | docs/source/notebooks/Euler-Maruyama_and_SDEs.ipynb | satrio-hw/pymc3 |
Again, the result is a set of samples from the posterior, including not only our parameters of interest but also the hidden states: | figure(figsize=(10, 6))
subplot(211)
plot(percentile(trace[xyh][..., 0], [2.5, 97.5], axis=0).T, 'k', label=r'$\hat{x}_{95\%}(t)$')
plot(xs, 'r', label='$x(t)$')
legend(loc=0)
subplot(234), hist(trace['τh']), axvline(τ), xlim([1.0, 4.0]), title('τ')
subplot(235), hist(trace['ah']), axvline(a), xlim([0, 2.0]), title('a')
subplot(236), hist(trace['mh']), axvline(m), xlim([0, 1]), title('m')
tight_layout() | _____no_output_____ | Apache-2.0 | docs/source/notebooks/Euler-Maruyama_and_SDEs.ipynb | satrio-hw/pymc3 |
Again, we can perform a posterior predictive check to verify that our data are likely given the fitted model: | # generate trace from posterior
ppc_trace = pm.sample_posterior_predictive(trace, model=model)
# plot with data
figure(figsize=(10, 3))
plot(percentile(ppc_trace['zh'], [2.5, 97.5], axis=0).T, 'k', label=r'$z_{95\% PP}(t)$')
plot(zs, 'r', label='$z(t)$')
legend()
%load_ext watermark
%watermark -n -u -v -iv -w | scipy 1.4.1
logging 0.5.1.2
matplotlib.pylab 1.18.5
re 2.2.1
pymc3 3.9.0
matplotlib 3.2.1
numpy 1.18.5
arviz 0.8.3
last updated: Mon Jun 15 2020
CPython 3.7.7
IPython 7.15.0
watermark 2.0.2
| Apache-2.0 | docs/source/notebooks/Euler-Maruyama_and_SDEs.ipynb | satrio-hw/pymc3 |
Introduction to the Python language (part 1) Notebook for the IoT course - IFSP Piracicaba Gustavo Voltani von Atzingen Python - version 2.7 This notebook contains an introduction to the basic Python commands. The following topics will be covered: * Print * Comments * Variable assignment and types * Working with strings (part 1) * Lists (part 1) * Control structures (if-elif-else, for, while) * Functions * Using modules (import) Print To "print" some text on the screen, Python (version 2.7) has a reserved word called print. | print "Hello Python 2.7 !" | Hello Python 2.7 !
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
We can also print several strings or numbers by separating them with ',' | print 'Part 1 - ', ' The answer is: ', 42 | Part 1 -  The answer is:  42
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
We can insert (numeric) variables into the middle of the text using the .format method | print 'The sensor reading values are {}Volts and {}Volts'.format(4.2, 1.68) | The sensor reading values are 4.2Volts and 1.68Volts
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
Comments Comments are inserted into the program using the '#' character, and with it the whole line is ignored by the interpreter | # this is a comment line | _____no_output_____ | MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
To make a block comment (several lines), use ''' at the beginning and ''' at the end of the comment block (strictly speaking this is an unassigned string literal, which the interpreter simply ignores) | ''' This is a comment block
All the lines in this block are ignored
by the interpreter
''' | _____no_output_____ | MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
Variable assignment In Python, variables are not explicitly declared. The interpreter performs the assignment at runtime. The types of structure handled by the interpreter are: * Numbers (number) - integer or real * Strings (string) * Lists (list) * Tuples (tuple) * Dictionaries (dictionary) | a = 42 # the variable a receives a number
b = 1.68 # real (floating-point) variable
c = 'texto' # text (a string)
print a, b, c | 42 1.68 texto
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
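Tuples and dictionaries appear in the list of types above but are not demonstrated in this part of the course; a minimal sketch added here for completeness (Python 2 syntax, matching the rest of the notebook):

```python
t = (1, 'a', 3.5)     # tuple: an immutable ordered sequence
d = {'sensor': 4.2}   # dictionary: a key/value mapping
print t[0], d['sensor']
```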
Variables can change their type during execution (at runtime) | a = 1.3
print 'value of a before: ', a
a = 'texto'
print 'value of a after: ', a | value of a before:  1.3
value of a after:  texto
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
Variables can be assigned simultaneously. This can be done to simplify the code and avoid creating temporary variables | a, b = 1, 1
print a, b
a, b = b, a + b
print a, b | 1 1
1 2
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
Strings Strings can be created using ' or " (single or double quotes) | nome = 'Gustavo' # this is a string
nome = "Joao" # this is also a string
letra = 'a' # strings can also hold a single character | _____no_output_____ | MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
We can use indexing to access elements of the string, or parts of it | nome = 'Gustavo Voltani von Atzingen'
print nome[0], nome[1], nome[8] # indexing starts at zero and goes up to the last value
nome = 'Gustavo Voltani von Atzingen'
print nome[-1], nome[-2] # there is also indexing from the end to the beginning, with
# negative numbers starting at -1
nome = 'Gustavo Voltani von Atzingen'
print nome[8:15] # we can take part of the string like this
print nome[20:] # from position 20 to the end
print nome[:7] # from the beginning up to position 6 | Voltani
Atzingen
Gustavo
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
There are several methods that can be applied to a string. The split method splits the string on a specified character. Other methods will be covered in later classes. | nome = 'Gustavo Voltani von Atzingen'
print nome.split(' ') # splitting the name on the blank space | ['Gustavo', 'Voltani', 'von', 'Atzingen']
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
Lists
Lists are ordered sequences of objects (which can be strings, numbers, lists or others) | lista = ['texto1', 'texto2', 'texto3', 'texto4']
print lista
# we can also have several types in the same list
lista = [42, 'texto2', 1.68, 'texto4']
print lista
# we can also have a list inside another list
lista = [ [42, 54, 1.7], 'texto2', 1.68, 'texto4']
print lista
# lists are also indexed and can be sliced in the same way
# as was done with strings
lista = [42, 34, 78, 1, 91, 1, 34]
print lista[0], lista[-1], lista[2:5] | 42 34 [78, 1, 91]
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
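Tuples and dictionaries, the last two structures listed in the variable-assignment section, behave much like lists, except that tuples are immutable and dictionaries are indexed by keys. A short illustrative sketch (not part of the original lesson): | tupla = (42, 1.68, 'texto')    # tuples use parentheses and cannot be modified
print tupla[0]
d = {'sensor': 4.2, 'led': 1}  # dictionaries map keys to values
print d['sensor']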
Control structures: if | a = 4
if a < 1:
print 'a é menor que 1'
elif a < 3:
print 'a é menor que 3 e maior ou igual 1'
elif a < 5:
print 'a é menor que 5 e maior ou igual 3'
else:
print 'a é maior= 5' | a é menor que 5 e maior ou igual 3
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
Control structures: for
for is a control structure that iterates over a list or a string | nome = 'gustavo'
for letra in nome:
print letra
lista = ['texto1', 'texto2', 'texto3', 'texto4']
for item in lista:
print item
# If we want a loop with a numeric count, we can
# use the range() function, or others that will be shown later
# Prints the numbers from 0 to 9
for i in range(10):
print i
# if we want to number the elements of a list we can use the enumerate function
lista = ['texto1', 'texto2', 'texto3', 'texto4']
for indice, item in enumerate(lista):
print indice, item | 0 texto1
1 texto2
2 texto3
3 texto4
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
Control structures: while
Repeats until the condition is false | contador = 0
while contador < 5:
print contador
contador += 1 | 0
1
2
3
4
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
Functions
Functions are written with the keyword def and the function name, together with its arguments. A function may (or may not) return one or more objects. | def somador(a, b):
    return a + b
somador(1, 2)
def separa_por_espao(texto):
if ' ' in texto:
return texto.split(' ')
else:
return None
nome1, nome2 = separa_por_espao('nome1 nome2')
print nome1, nome2
# functions can have keyword arguments
def soma(a, b=1):
return a + b
print soma(1,2)
print soma(1) | 3
2
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
Modules and importing | import datetime
tempo_atual = datetime.datetime.now()
print tempo_atual.hour, tempo_atual.minute, tempo_atual.second
from datetime import datetime as d
tempo_atual = d.now()
print tempo_atual.hour, tempo_atual.minute, tempo_atual.second | 16 26 42
| MIT | aula-03-python/.ipynb_checkpoints/Introducao-Python-01-checkpoint.ipynb | Atzingen/curso-IoT-2017 |
Computer vision is the discipline of teaching machines how to "see": recognizing the objects in images or videos captured by a camera, detecting where each object is located, tracking target objects, and from there understanding and describing the scene and story in the picture or video, thereby emulating the human brain's visual system. Computer vision is therefore also commonly called machine vision; its goal is to build artificial systems that can "perceive" information from images or videos.

After decades of development, computer vision has been applied in many fields, including transportation (license-plate recognition, automated capture of traffic violations), security (face-recognition gates, residential surveillance), finance (face-based payment, automatic recognition of documents at bank counters), healthcare (medical-image diagnosis), and industrial production (automatic detection of product defects), influencing or changing people's daily lives and industrial production methods. As the technology keeps evolving, more products and applications will surely emerge, creating greater convenience and broader opportunities in our lives.

Figure 1: Applications of computer vision technology in various fields

PaddlePaddle provides a rich set of APIs for computer vision tasks and guarantees their performance through low-level optimization and acceleration. It also provides a rich model zoo covering image classification, detection, segmentation, text recognition, video understanding, and more. Users can build models directly from these APIs, or do secondary development on top of the model zoo. Due to space constraints, this chapter focuses on the classic model of computer vision (the convolutional neural network) and two typical tasks (image classification and object detection), covering the following:

- **Convolutional neural networks**: Convolutional Neural Networks (CNNs) are the most classic model structure in computer vision. This tutorial introduces the commonly used CNN building blocks: convolution, pooling, batch normalization, dropout, and so on.
- **Image classification**: the classic model structures for image classification, including LeNet, AlexNet, VGG, GoogLeNet and ResNet, demonstrated through an eye-disease screening case study.
- **Object detection**: the YOLOv3 object-detection algorithm, demonstrated through a forestry pest detection case study.

The development of computer vision

The history of computer vision begins with biological vision. There is not yet an academic consensus on the origin of biological vision: some researchers believe the earliest biological vision formed in [jellyfish about 700 million years ago](https://www.pnas.org/content/109/46/18868), while others place its emergence in the Cambrian period, about 500 million years ago [[1](https://doi.org/10.1038%2Fnature10097), [2](https://en.wikipedia.org/wiki/Evolution_of_the_eye)]. The cause of the Cambrian explosion remains an unsolved mystery, but it is certain that animals acquired vision in the Cambrian: predators could find prey more easily, and prey could spot their natural enemies earlier. Vision intensified the contest between hunter and hunted and produced fiercer rules of survival. The formation of visual systems powerfully drove the evolution of the food chain and accelerated the course of biological evolution; it is an important milestone in the history of life.

After hundreds of millions of years of evolution, today's human visual system is highly complex and powerful. The human brain contains about 100 billion neurons connected into networks, and this enormous visual neural network lets us observe the surrounding world with ease, as shown in Figure 2.

Figure 2: Human visual perception

For humans, telling cats from dogs is extremely easy. For a computer, even an expert programmer finds it hard to write a general-purpose rule (for example, a program might assume that large animals are dogs and small ones are cats, but with a different camera angle a cat may occupy more pixels than a dog). How, then, can a computer understand the world around it the way people do? Researchers have attacked this problem from different angles, and from this a series of subtasks has developed, as shown in Figure 3.

Figure 3: Computer vision subtasks

- **(a) Image Classification:** identify the category of the objects in an image (e.g. bottle, cup, cube).
- **(b) Object Localization:** detect the category of every object in the image and mark each object's location accurately.
- **(c) Semantic Segmentation:** label the category of every pixel in the image; pixels of the same category are shown in one color.
- **(d) Instance Segmentation:** note that the detection task in (b) only needs to mark object locations, while the instance segmentation task in (d) must mark not only each object's location but also its contour.

In early image-classification work, image features were usually extracted by hand and then classified with machine-learning algorithms. The classification result depended strongly on the feature-extraction method, which often only experienced researchers could design well, as shown in Figure 4.

Figure 4: Early image classification

Against this background, neural-network-based feature extraction emerged. Yann LeCun was the first to apply convolutional neural networks to image recognition. His core logic was to use a CNN to extract image features and predict the image's category, continually adjusting the network parameters on training data until the network automatically extracted image features and classified them, as shown in Figure 5.

Figure 5: An early convolutional neural network on an image task

This method achieved great success on handwritten-digit recognition, but over the following years it did not develop much further. One reason was that datasets were incomplete: only simple tasks could be handled, and large inputs overfit easily. The other reason was hardware bottlenecks: when the network was complex, computation was extremely slow.

Today, with the continual progress of internet technology, data volume has grown enormously and ever richer datasets keep emerging. Thanks to improved hardware, computing power has also become much stronger. Researchers keep applying new models and algorithms to computer vision, producing ever richer model structures and higher accuracy. The problems computer vision handles have also broadened, including classification, detection, segmentation, scene description, image generation and style transfer, and are no longer limited to 2-D images, extending to video processing and 3-D vision.

Convolutional neural networks

The convolutional neural network is the most widely used model structure in computer vision today. This chapter introduces the basic CNN modules:

- Convolution
- Pooling
- Batch Normalization
- Dropout

Recall the handwritten-digit recognition task from the previous chapter, "A case study to master deep learning", where a fully connected network extracted features by flattening all the pixels of an image into a 1-D vector fed to the network. That approach has two problems:

**1. The spatial information of the input is lost.** Spatially adjacent pixels usually have similar RGB values, and the RGB channels are typically closely correlated, but this information is lost when the image is flattened into a 1-D vector. The shape of the image data may also hide essential patterns, which are likewise ignored when the image is fed into a fully connected network as a 1-D vector.

**2. Too many model parameters, so overfitting happens easily.** In the digit-recognition example, every pixel connects to every output neuron. As the image grows, the number of input neurons grows with the square of the image size, so the number of model parameters explodes and overfitting becomes likely.

To solve these problems we introduce the convolutional neural network for feature extraction: it captures feature patterns between adjacent pixels while keeping the number of parameters independent of the image size. Figure 6 shows a typical CNN structure: layers of convolution and pooling act on the input image, usually followed by a series of fully connected layers at the end of the network; Dropout is usually added to the network to prevent overfitting.

Figure 6: A classic convolutional neural network structure

------
**Note:**

In a CNN, computation happens within a pixel's spatial neighborhood, and a convolution kernel has far fewer parameters than a fully connected layer. The kernel itself is independent of the input image size; it represents the extraction of some feature pattern within a spatial neighborhood. For example, some kernels extract object edges and some extract corner features, and different regions of an image share the same kernel, so the same kernel can still be used when the input image size changes.

------

Convolution

This subsection introduces the principle and implementation of the convolution algorithm, and shows through concrete cases how to apply convolution to images. It covers:

- Convolution computation
- Padding
- Stride
- Receptive field
- Multiple input channels, multiple output channels, and batch operation
- Paddle's convolution API
- Worked examples of the convolution operator

Convolution computation

Convolution is an integral transform from mathematical analysis; image processing uses its discrete form. Note that what a CNN's convolution layer actually implements is the cross-correlation operation, which differs from the convolution defined in mathematical analysis. Consistent with other frameworks and CNN tutorials, we use cross-correlation as the definition of convolution here; the computation is shown in Figure 7.

Figure 7: Convolution computation

------
**Note:**

The convolution kernel is also called a filter. If the kernel's height and width are $k_h$ and $k_w$, it is called a $k_h\times k_w$ convolution; for example, a $3\times5$ convolution has a kernel of height 3 and width 5.

------

- As in Figure 7(a): the left array is the $3\times3$ input data; the middle $2\times2$ array is the convolution kernel. First align the kernel's top-left corner with the input's top-left corner (position (0, 0)), multiply each kernel element by the input element at the corresponding position, and sum all the products to get the first output value:

$$0\times1 + 1\times2 + 2\times4 + 3\times5 = 25 \ \ \ \ \ \ \ (a)$$

- As in Figure 7(b): slide the kernel right so its top-left corner aligns with input position (0, 1); the same multiply-and-sum of the 4 products gives the second output value:

$$0\times2 + 1\times3 + 2\times5 + 3\times6 = 31 \ \ \ \ \ \ \ (b)$$

- As in Figure 7(c): slide the kernel down so its top-left corner aligns with input position (1, 0), giving the third output value:

$$0\times4 + 1\times5 + 2\times7 + 3\times8 = 43 \ \ \ \ \ \ \ (c)$$

- As in Figure 7(d): slide the kernel right so its top-left corner aligns with input position (1, 1), giving the fourth output value:

$$0\times5 + 1\times6 + 2\times8 + 3\times9 = 49 \ \ \ \ \ \ \ (d)$$

The kernel computation can be expressed by the formula below, where $a$ is the input image, $b$ the output feature map, and $w$ the kernel parameters, all 2-D arrays, and $\sum_{u,v}{\ }$ sums over the kernel parameters:

$$b[i, j] = \sum_{u,v}{a[i+u, j+v]\cdot w[u, v]}$$

For example, if the kernel in the figure above is $2\times 2$, then $u$ can take 0 and 1 and $v$ can also take 0 and 1, that is:

$$b[i, j] = a[i+0, j+0]\cdot w[0, 0] + a[i+0, j+1]\cdot w[0, 1] + a[i+1, j+0]\cdot w[1, 0] + a[i+1, j+1]\cdot w[1, 1]$$

You can verify for yourself that evaluating this formula for different $[i, j]$ reproduces the example in the figure.

- **[Think] When the kernel size is $3 \times 3$, what is the correspondence between $b$ and $a$?**

------
**Further note:**

In a CNN, a convolution operator also adds a bias term on top of the computation described above. For example, with bias 1 the results above become:

$$0\times1 + 1\times2 + 2\times4 + 3\times5 \mathbf{\ + 1} = 26$$
$$0\times2 + 1\times3 + 2\times5 + 3\times6 \mathbf{\ + 1} = 32$$
$$0\times4 + 1\times5 + 2\times7 + 3\times8 \mathbf{\ + 1} = 44$$
$$0\times5 + 1\times6 + 2\times8 + 3\times9 \mathbf{\ + 1} = 50$$

------

Padding

In the example above, the input image is $3\times3$ and the output $2\times2$: one convolution shrinks the image. The output feature-map size is computed as follows (with kernel height and width $k_h$ and $k_w$):

$$H_{out} = H - k_h + 1$$
$$W_{out} = W - k_w + 1$$

With input size 4 and kernel size 3, the output size is $4-3+1=2$. You can check that the formula holds for other input and kernel sizes. When the kernel is larger than 1, the output feature map is smaller than the input image, and after repeated convolutions the image keeps shrinking. To avoid this, the border of the image is usually padded, as shown in Figure 8.

Figure 8: Padding

- As in Figure 8(a): padding of size 1 with value 0. After padding, the input grows from $4\times4$ to $6\times6$, and a $3\times3$ kernel produces a $4\times4$ output.
- As in Figure 8(b): padding of size 2 with value 0. After padding, the input grows from $4\times4$ to $8\times8$, and a $3\times3$ kernel produces a $6\times6$ output.

If $p_{h1}$ rows are padded before the first row and $p_{h2}$ after the last row, and $p_{w1}$ columns before the first column and $p_{w2}$ after the last column, the padded image has size $(H + p_{h1} + p_{h2})\times(W + p_{w1} + p_{w2})$. After a convolution with a $k_h\times k_w$ kernel, the output image size is:

$$H_{out} = H + p_{h1} + p_{h2} - k_h + 1$$
$$W_{out} = W + p_{w1} + p_{w2} - k_w + 1$$

Usually equal padding is used on both sides of each dimension, i.e. $p_{h1} = p_{h2} = p_h,\ \ p_{w1} = p_{w2} = p_w$, so the formulas become:

$$H_{out} = H + 2p_h - k_h + 1$$
$$W_{out} = W + 2p_w - k_w + 1$$

Kernel sizes are usually odd numbers such as 1, 3, 5, 7. With padding $p_h=(k_h-1)/2 ,p_w=(k_w-1)/2$, the image size is unchanged after convolution: for example, a kernel of size 3 with padding 1 preserves the image size, and likewise a kernel of size 5 with padding 2.

Stride

In Figure 8 the kernel slides one pixel at a time, which is the special case of stride 1. Figure 9 shows a convolution with stride 2: each time the kernel moves over the image, it moves by 2 pixels.

Figure 9: Convolution with stride 2

With strides $s_h$ and $s_w$ in the height and width directions, the output feature-map size is:

$$H_{out} = \frac{H + 2p_h - k_h}{s_h} + 1$$
$$W_{out} = \frac{W + 2p_w - k_w}{s_w} + 1$$

Suppose the input image is $H\times W = 100 \times 100$, the kernel $k_h \times k_w = 3 \times 3$, padding $p_h = p_w = 1$ and stride $s_h = s_w = 2$; then the output feature-map size is:

$$H_{out} = \frac{100 + 2 - 3}{2} + 1 = 50$$
$$W_{out} = \frac{100 + 2 - 3}{2} + 1 = 50$$

Receptive field

Each value on the output feature map is obtained by multiplying a $k_h\times k_w$ region of the input element-wise with the kernel and summing, so a change to any element within that $k_h\times k_w$ input region affects the output pixel's value. We call that region the receptive field of the corresponding point on the output feature map: any change of an element inside the receptive field changes the output value. For example, a $3\times3$ convolution has a $3\times3$ receptive field, as shown in Figure 10.

Figure 10: A convolution with a 3×3 receptive field

After two layers of $3\times3$ convolution, the receptive field grows to $5\times5$, as shown in Figure 11.

Figure 11: A convolution with a 5×5 receptive field

Hence, as the depth of a convolutional network increases, the receptive field grows, and one pixel of the output feature map contains more of the image's semantic information.

Multiple input channels, multiple output channels, and batch operation

The convolution computation described so far is simple; real applications are much more complex. For example, a color image has R, G, B channels, so multi-channel input must be handled. The output feature map usually has multiple channels too, and network computation usually processes a batch of samples together, so the convolution operator must process batched, multi-input-channel, multi-output-channel data. These cases are described in turn below.

- **Multiple input channels**

In the examples above the convolution layer's data was a 2-D array, but in fact an image usually has R, G, B channels, so the form of the kernel also changes. Suppose the input image has $C_{in}$ channels and the input data has shape $C_{in}\times{H_{in}}\times{W_{in}}$; the computation is shown in Figure 12.

1. Design one 2-D array as a kernel for each channel; the kernel array has shape $C_{in}\times{k_h}\times{k_w}$.
1. For each channel $c_{in} \in [0, C_{in})$, convolve a kernel of size $k_h\times{k_w}$ with the 2-D array of size $H_{in}\times{W_{in}}$.
1. Sum the results of the $C_{in}$ channels to obtain one 2-D array of shape $H_{out}\times{W_{out}}$.

Figure 12: Multi-input-channel computation

- **Multiple output channels**

The above covered the computation with a single kernel. If we want to detect several types of features, we can use several kernels. In general, then, the output feature map of a convolution also has multiple channels $C_{out}$; we design $C_{out}$ kernels of shape $C_{in}\times{k_h}\times{k_w}$, so the kernel array has shape $C_{out}\times C_{in}\times{k_h}\times{k_w}$, as shown in Figure 13.

1. For each output channel $c_{out} \in [0, C_{out})$, convolve the input image with one kernel of shape $C_{in}\times{k_h}\times{k_w}$ as described above.
1. Stack the $C_{out}$ 2-D arrays of shape $H_{out}\times{W_{out}}$ into a 3-D array of shape $C_{out}\times{H_{out}}\times{W_{out}}$.

------
**Note:**

The number of output channels of a convolution is usually called the number of kernels.

------

Figure 13: Multi-output-channel computation

- **Batch operation**

In CNN computation, multiple samples are usually grouped into a mini-batch, so the input data has shape $N\times{C_{in}}\times{H_{in}}\times{W_{in}}$. The same kernels are applied to every image, so the kernel shape is the same as in the multi-output-channel case, still $C_{out}\times C_{in}\times{k_h}\times{k_w}$, and the output feature map has shape $N\times{C_{out}}\times{H_{out}}\times{W_{out}}$, as shown in Figure 14.

Figure 14: Batch operation

Paddle's convolution API

The convolution operator in Paddle is [paddle.nn.Conv2D](https://www.paddlepaddle.org.cn/documentation/docs/zh/2.0-rc/api/paddle/nn/layer/conv/Conv2D_cn.html); you can call the API directly for the computation or modify on top of it. The "2D" in Conv2D means the kernel is two-dimensional, used mostly for image data; similarly, Conv3D can be used for video data (sequences of images).

> *class* paddle.nn.Conv2D (*in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', weight_attr=None, bias_attr=None, data_format='NCHW'*)

Common parameters:

- in_channels(int) - number of channels of the input image.
- out_channels(int) - number of kernels, equal to the number of output feature-map channels, i.e. $C_{out}$ above.
- kernel_size(int|list|tuple) - kernel size. It can be an integer such as 3, meaning the kernel's height and width are both 3, or a list of two integers such as [3,2], meaning the kernel's height is 3 and width is 2.
- stride(int|list|tuple, optional) - stride. Default 1, meaning both the vertical and horizontal strides are 1; a list of two integers such as [3,2] means a vertical stride of 3 and a horizontal stride of 2.
- padding(int|list|tuple|str, optional) - padding. An integer such as 1 means vertical and horizontal border padding of size 1; a list of two integers such as [2,1] means vertical padding of 2 and horizontal padding of 1.

The input data has shape $[N, C_{in}, H_{in}, W_{in}]$ and the output data shape $[N, out\_channels, H_{out}, W_{out}]$; the weight parameter $w$ has shape $[out\_channels, C_{in}, filter\_size\_h, filter\_size\_w]$ and the bias parameter $b$ has shape $[out\_channels]$. Note that even a single grayscale image $[H_{in}, W_{in}]$ must be arranged into a four-dimensional input $[1, 1, H_{in}, W_{in}]$.

Worked examples of the convolution operator

Three cases of applying the convolution operator to images follow; observe the results of each computation.

**Case 1 - simple black/white boundary detection**

Below, the Conv2D operator is used for an image boundary-detection task. The left part of the image is bright and the right part dark, and we want to detect the boundary between light and dark. Set the kernel along the width direction to $[1, 0, -1]$: this kernel subtracts the values of two pixels one position apart along the width. As the kernel slides over the image, if the pixels it covers lie in a region of uniform brightness, the difference between the two pixels one apart is 0; only when the covered pixels span both the bright and the dark region is the difference nonzero. Applying this kernel to the image, only the pixels at the black/white boundary are nonzero in the output feature map. The code is shown below; the result is plotted underneath. | import matplotlib.pyplot as plt
import numpy as np
import paddle
from paddle.nn import Conv2D
from paddle.nn.initializer import Assign
%matplotlib inline
# Create the initial weight parameter w
w = np.array([1, 0, -1], dtype='float32')
# Reshape the weights into a 4-D tensor of shape [cout, cin, kh, kw]
w = w.reshape([1, 1, 1, 3])
# Create the convolution operator, setting the number of output channels,
# the kernel size and the initial weight parameters
# kernel_size = [1, 3] means kh = 1, kw = 3
# When creating the operator, the weight_attr attribute specifies how the
# parameters are initialized; here they are initialized from a numpy.ndarray
conv = Conv2D(in_channels=1, out_channels=1, kernel_size=[1, 3],
weight_attr=paddle.ParamAttr(
initializer=Assign(value=w)))
# Create the input image: pixels on the left are 1, pixels on the right are 0
img = np.ones([50,50], dtype='float32')
img[:, 30:] = 0.
# Reshape the image to [N, C, H, W] form
x = img.reshape([1,1,50,50])
# Convert the numpy.ndarray into a paddle tensor
x = paddle.to_tensor(x)
# Apply the convolution operator to the input image
y = conv(x)
# Convert the output tensor into a numpy.ndarray
out = y.numpy()
f = plt.subplot(121)
f.set_title('input image', fontsize=15)
plt.imshow(img, cmap='gray')
f = plt.subplot(122)
f.set_title('output featuremap', fontsize=15)
# The Conv2D operator outputs data in [N, C, H, W] form
# Here N, C = 1, so the output has shape [1, 1, H, W], a 4-D array,
# but plt.imshow only accepts a 2-D array when drawing a grayscale image,
# so numpy.squeeze removes the size-1 dimensions
plt.imshow(out.squeeze(), cmap='gray')
plt.show()
# Inspect the name and values of the conv layer's weight parameters
print(conv.weight)
# Inspect the name and values of the conv layer's bias parameters
print(conv.bias) | Parameter containing:
Tensor(shape=[1, 1, 1, 3], dtype=float32, place=CUDAPlace(0), stop_gradient=False,
[[[[ 1., 0., -1.]]]])
Parameter containing:
Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=False,
[0.])
| Apache-2.0 | junior_class/chapter-3-Computer_Vision/notebook/3-1-CV-CNN_Basis.ipynb | CS-Learnings/PaddlePaddle-awesome-DeepLearning |
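As a cross-check of the hand computation in Figure 7, here is a minimal plain-NumPy sketch of the same cross-correlation (no Paddle involved); with the 3x3 input holding the values 1 to 9 and the 2x2 kernel [[0,1],[2,3]], it reproduces the four values 25, 31, 43 and 49 derived above. | import numpy as np

a = np.arange(1, 10).reshape(3, 3)   # the 3x3 input from Figure 7
w = np.arange(4).reshape(2, 2)       # the 2x2 kernel [[0, 1], [2, 3]]
b = np.zeros((2, 2), dtype=int)
for i in range(2):
    for j in range(2):
        b[i, j] = (a[i:i+2, j:j+2] * w).sum()   # slide, multiply, sum
print(b)   # [[25 31]
           #  [43 49]]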
**Case 2 - detecting object edges in an image** The example above used an artificially constructed image to show a convolution network detecting the light/dark boundary. For real images, a suitable kernel (a 3*3 kernel whose center value is 8 and whose surrounding ring holds eight values of -1) can likewise be applied to detect an object's contour; observe the correspondence between the output feature map and the original image, as in the code below: | import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
import paddle
from paddle.nn import Conv2D
from paddle.nn.initializer import Assign
img = Image.open('./work/images/section1/000000098520.jpg')
# Set the kernel parameters
w = np.array([[-1,-1,-1], [-1,8,-1], [-1,-1,-1]], dtype='float32')/8
w = w.reshape([1, 1, 3, 3])
# The input has 3 channels, so tile the kernel from shape [1,1,3,3] to [1,3,3,3]
w = np.repeat(w, 3, axis=1)
# Create the convolution operator with 1 output channel and a 3x3 kernel,
# using the values set above as the initial kernel weights
conv = Conv2D(in_channels=3, out_channels=1, kernel_size=[3, 3],
weight_attr=paddle.ParamAttr(
initializer=Assign(value=w)))
# Convert the loaded image into a float32 numpy.ndarray
x = np.array(img).astype('float32')
# The image loads as an ndarray of shape [H, W, 3];
# move the channel dimension to the front
x = np.transpose(x, (2,0,1))
# Reshape the data to [N, C, H, W] format
x = x.reshape(1, 3, img.height, img.width)
x = paddle.to_tensor(x)
y = conv(x)
out = y.numpy()
plt.figure(figsize=(20, 10))
f = plt.subplot(121)
f.set_title('input image', fontsize=15)
plt.imshow(img)
f = plt.subplot(122)
f.set_title('output feature map', fontsize=15)
plt.imshow(out.squeeze(), cmap='gray')
plt.show()
| _____no_output_____ | Apache-2.0 | junior_class/chapter-3-Computer_Vision/notebook/3-1-CV-CNN_Basis.ipynb | CS-Learnings/PaddlePaddle-awesome-DeepLearning |
**Case 3 - mean blur of an image** Another fairly common kernel (a 5*5 kernel in which every value is 1) averages the current pixel with the pixels in its neighborhood, which smooths out noisy pixels in the image, as in the code below: | import paddle
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
from paddle.nn import Conv2D
from paddle.nn.initializer import Assign
# Load the image and convert it to a numpy.ndarray
# Convert it to grayscale
img = Image.open('./work/images/section1/000000355610.jpg').convert('L')
img = np.array(img)
# Create the initial parameters
w = np.ones([1, 1, 5, 5], dtype = 'float32')/25
conv = Conv2D(in_channels=1, out_channels=1, kernel_size=[5, 5],
weight_attr=paddle.ParamAttr(
initializer=Assign(value=w)))
x = img.astype('float32')
x = x.reshape(1,1,img.shape[0], img.shape[1])
x = paddle.to_tensor(x)
y = conv(x)
out = y.numpy()
plt.figure(figsize=(20, 12))
f = plt.subplot(121)
f.set_title('input image')
plt.imshow(img, cmap='gray')
f = plt.subplot(122)
f.set_title('output feature map')
out = out.squeeze()
plt.imshow(out, cmap='gray')
plt.show() | _____no_output_____ | Apache-2.0 | junior_class/chapter-3-Computer_Vision/notebook/3-1-CV-CNN_Basis.ipynb | CS-Learnings/PaddlePaddle-awesome-DeepLearning |
Pooling

Pooling replaces the network's output at a position with an overall statistic of the nearby outputs. Its benefit is that when the input shifts by a small amount, most of the pooled outputs remain unchanged. For example, when deciding whether an image is a face, we need to know that the face has an eye on the left and an eye on the right, but not the eyes' exact positions; obtaining an overall statistic by pooling a region of pixels is very useful here. Because the feature map becomes smaller after pooling, if a fully connected layer follows, pooling effectively reduces the number of neurons, saving memory and improving computational efficiency.

As shown in Figure 15, a $2\times 2$ region is pooled into one pixel value, usually in one of two ways: average pooling and max pooling. (A short code sketch of both operators appears right after the BatchNorm1D example below.)

Figure 15: Pooling

- As in Figure 15(a): average pooling. Here a $2\times2$ pooling window moves with stride 2, and the pixels covered by the window are averaged to give the corresponding pixel of the output feature map.
- As in Figure 15(b): max pooling. The maximum of the pixels covered by the window gives the output pixel value.

Sliding the pooling window over the image produces the whole output feature map. The window size is called the pooling size, written $k_h \times k_w$; in CNNs the most common choice is a $2 \times 2$ window with stride 2. As with kernels, the step by which the pooling window moves over the image is called the stride, written $s_w$ and $s_h$ when the width and height movements differ. The image to be pooled can also be padded, similarly to convolution: suppose $p_{h1}$ rows are padded before the first row and $p_{h2}$ after the last, and $p_{w1}$ columns before the first column and $p_{w2}$ after the last. The pooling layer's output feature-map size is then:

$$H_{out} = \frac{H + p_{h1} + p_{h2} - k_h}{s_h} + 1$$
$$W_{out} = \frac{W + p_{w1} + p_{w2} - k_w}{s_w} + 1$$

In CNNs, with the usual $2\times2$ window, stride 2 and padding 0, the output feature-map size is:

$$H_{out} = \frac{H}{2}$$
$$W_{out} = \frac{W}{2}$$

With this kind of pooling, the height and width of the output feature map are both halved, but the number of channels does not change.

Batch Normalization

[Batch Normalization](https://arxiv.org/abs/1502.03167) (BatchNorm) was proposed by Ioffe and Szegedy in 2015 and has been widely applied in deep learning. Its purpose is to standardize the outputs of a network's intermediate layers so that they become more stable.

We usually standardize a neural network's input data so that the processed samples have mean 0 and variance 1, because a relatively fixed input distribution helps the algorithm stay stable and converge. For a deep network, however, the parameters are constantly updated, so even if the input data has been standardized, the layers near the end still receive drastically changing inputs, which usually causes numerical instability and makes the model hard to converge. BatchNorm makes the intermediate layers' outputs more stable and has the following three advantages:

- learning proceeds quickly (larger learning rates can be used)
- the model is less sensitive to initial values
- overfitting is suppressed to some degree

BatchNorm's main idea is to normalize the neuron values per mini-batch during training, so that the data distribution has mean 0 and variance 1. The computation is as follows:

**1. Compute the mean of the samples in the mini-batch**

$$\mu_B \leftarrow \frac{1}{m}\sum_{i=1}^mx^{(i)}$$

where $x^{(i)}$ is the $i$-th sample of the mini-batch. For example, for an input mini-batch of 3 samples with 2 features each:

$$x^{(1)} = (1,2), \ \ x^{(2)} = (3,6), \ \ x^{(3)} = (5,10)$$

compute the mean over the mini-batch separately for each feature:

$$\mu_{B0} = \frac{1+3+5}{3} = 3, \ \ \ \mu_{B1} = \frac{2+6+10}{3} = 6$$

so the sample mean is:

$$\mu_{B} = (\mu_{B0}, \mu_{B1}) = (3, 6)$$

**2. Compute the variance of the samples in the mini-batch**

$$\sigma_B^2 \leftarrow \frac{1}{m}\sum_{i=1}^m(x^{(i)} - \mu_B)^2$$

The formulas above first compute a batch's mean $\mu_B$ and variance $\sigma_B^2$, then normalize the input data into a distribution with mean 0 and variance 1. For the given inputs $x^{(1)}, x^{(2)}, x^{(3)}$, the variance of each feature is:

$$\sigma_{B0}^2 = \frac{1}{3} \cdot ((1-3)^2 + (3-3)^2 + (5-3)^2) = \frac{8}{3}$$
$$\sigma_{B1}^2 = \frac{1}{3} \cdot ((2-6)^2 + (6-6)^2 + (10-6)^2) = \frac{32}{3}$$

so the sample variance is:

$$\sigma_{B}^2 = (\sigma_{B0}^2, \sigma_{B1}^2) = (\frac{8}{3}, \frac{32}{3})$$

**3. Compute the standardized output**

$$\hat{x}^{(i)} \leftarrow \frac{x^{(i)} - \mu_B}{\sqrt{(\sigma_B^2 + \epsilon)}}$$

where $\epsilon$ is a tiny value (e.g. $1e-7$) whose main purpose is to keep the denominator from being 0. For the given inputs $x^{(1)}, x^{(2)}, x^{(3)}$, the standardized outputs are:

$$\hat{x}^{(1)} = (\frac{1 - 3}{\sqrt{\frac{8}{3}}}, \ \ \frac{2 - 6}{\sqrt{\frac{32}{3}}}) = (-\sqrt{\frac{3}{2}}, \ \ -\sqrt{\frac{3}{2}})$$
$$\hat{x}^{(2)} = (\frac{3 - 3}{\sqrt{\frac{8}{3}}}, \ \ \frac{6 - 6}{\sqrt{\frac{32}{3}}}) = (0, \ \ 0) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$
$$\hat{x}^{(3)} = (\frac{5 - 3}{\sqrt{\frac{8}{3}}}, \ \ \frac{10 - 6}{\sqrt{\frac{32}{3}}}) = (\sqrt{\frac{3}{2}}, \ \ \sqrt{\frac{3}{2}}) \ \ \ \ $$

- You can verify for yourself that the mini-batch formed by $\hat{x}^{(1)}, \hat{x}^{(2)}, \hat{x}^{(3)}$ has mean 0 and variance 1.

Forcibly constraining the output layer's distribution to be standardized could cause certain feature patterns to be lost, so after standardization BatchNorm applies a scale and shift to the data:

$$y_i \leftarrow \gamma \hat{x_i} + \beta$$

where $\gamma$ and $\beta$ are learnable parameters, initialized to $\gamma = 1, \beta = 0$ and continually adjusted during training.

The above is the computational logic of BatchNorm; examples for two input data formats follow. Paddle supports inputs of rank 2, 3, 4 and 5; here we show the rank-2 and rank-4 cases.

* **Example 1:** When the input data shape is $[N, K]$, which typically corresponds to a fully connected layer's output, see the example code below. In this case the mean and variance of the N samples are computed separately for each of the K components, and the data and parameters correspond as follows:

- input x, [N, K]
- output y, [N, K]
- mean $\mu_B$, [K, ]
- variance $\sigma_B^2$, [K, ]
- scale parameter $\gamma$, [K, ]
- shift parameter $\beta$, [K, ] | # Example for input data of shape [N, K]
import numpy as np
import paddle
from paddle.nn import BatchNorm1D
# Create the data
data = np.array([[1,2,3], [4,5,6], [7,8,9]]).astype('float32')
# Use BatchNorm1D to compute the normalized output
# The input shape is [N, K]; num_features equals K
bn = BatchNorm1D(num_features=3)
x = paddle.to_tensor(data)
y = bn(x)
print('output of BatchNorm1D Layer: \n {}'.format(y.numpy()))
# Use Numpy to compute the mean, standard deviation and normalized output
# Here we verify feature 0
a = np.array([1,4,7])
a_mean = a.mean()
a_std = a.std()
b = (a - a_mean) / a_std
print('mean {}, std {}, \n output {}'.format(a_mean, a_std, b))
# Readers are encouraged to verify features 1 and 2 as well, and to check that the numpy results match paddle's | output of BatchNorm1D Layer:
[[-1.2247438 -1.2247438 -1.2247438]
[ 0. 0. 0. ]
[ 1.2247438 1.2247438 1.2247438]]
mean 4.0, std 2.449489742783178,
output [-1.22474487 0. 1.22474487]
| Apache-2.0 | junior_class/chapter-3-Computer_Vision/notebook/3-1-CV-CNN_Basis.ipynb | CS-Learnings/PaddlePaddle-awesome-DeepLearning |
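Two quick follow-ups to the material above. First, the comment in the BatchNorm1D cell suggests verifying features 1 and 2 as well; a minimal plain-numpy sketch over the columns [2, 5, 8] and [3, 6, 9] reproduces the second and third output columns. Second, the Pooling section above had no code example of its own, so a small sketch of paddle.nn.MaxPool2D and paddle.nn.AvgPool2D follows; the toy 4x4 input is an illustrative assumption. | import numpy as np
import paddle
from paddle.nn import MaxPool2D, AvgPool2D

# Verify features 1 and 2 of the BatchNorm1D example by hand
for col in (np.array([2., 5., 8.]), np.array([3., 6., 9.])):
    print((col - col.mean()) / col.std())   # each prints [-1.2247  0.  1.2247]

# Pooling sketch: a 2x2 window with stride 2 halves the height and width
data = np.arange(16, dtype='float32').reshape([1, 1, 4, 4])  # [N, C, H, W]
x = paddle.to_tensor(data)
print(MaxPool2D(kernel_size=2, stride=2)(x).numpy())  # max of each 2x2 block
print(AvgPool2D(kernel_size=2, stride=2)(x).numpy())  # mean of each 2x2 block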
* **Example 2:** When the input data shape is $[N, C, H, W]$, which typically corresponds to a convolution layer's output, see the example code below. In this case the data is unfolded along the C dimension: for each channel, the mean and variance are computed over all $N\times H \times W$ pixels of the N samples, and the data and parameters correspond as follows:

- input x, [N, C, H, W]
- output y, [N, C, H, W]
- mean $\mu_B$, [C, ]
- variance $\sigma_B^2$, [C, ]
- scale parameter $\gamma$, [C, ]
- shift parameter $\beta$, [C, ]

------
**Tip:**

A reader might ask: "Doesn't BatchNorm also apply an affine transform after standardization? Why do the results computed with Numpy match the BatchNorm operator?" The reason is that the BatchNorm operator automatically initializes $\gamma = 1, \beta = 0$, so at that point the affine transform is the identity. During training the two parameters are continually learned, and then the affine transform takes effect.

------ | # BatchNorm example for input data of shape [N, C, H, W]
import numpy as np
import paddle
from paddle.nn import BatchNorm2D
# Set the random seed so the results are reproducible on every run
np.random.seed(100)
# Create the data
data = np.random.rand(2,3,3,3).astype('float32')
# Use BatchNorm2D to compute the normalized output
# The input shape is [N, C, H, W]; num_features equals C
bn = BatchNorm2D(num_features=3)
x = paddle.to_tensor(data)
y = bn(x)
print('input of BatchNorm2D Layer: \n {}'.format(x.numpy()))
print('output of BatchNorm2D Layer: \n {}'.format(y.numpy()))
# Take channel 0 of data and
# use numpy to compute the mean, standard deviation and normalized output
a = data[:, 0, :, :]
a_mean = a.mean()
a_std = a.std()
b = (a - a_mean) / a_std
print('channel 0 of input data: \n {}'.format(a))
print('mean {}, std {}, \n output: \n {}'.format(a_mean, a_std, b))
# Note: the output computed here with numpy
# differs slightly from the BatchNorm2D operator's result,
# because for numerical stability the BatchNorm2D operator
# adds a small float epsilon=1e-05 to the denominator | input of BatchNorm2D Layer:
[[[[0.54340494 0.2783694 0.4245176 ]
[0.84477615 0.00471886 0.12156912]
[0.67074907 0.82585275 0.13670659]]
[[0.5750933 0.89132196 0.20920213]
[0.18532822 0.10837689 0.21969749]
[0.9786238 0.8116832 0.17194101]]
[[0.81622475 0.27407375 0.4317042 ]
[0.9400298 0.81764936 0.33611196]
[0.17541045 0.37283206 0.00568851]]]
[[[0.25242636 0.7956625 0.01525497]
[0.5988434 0.6038045 0.10514768]
[0.38194343 0.03647606 0.89041156]]
[[0.98092085 0.05994199 0.89054596]
[0.5769015 0.7424797 0.63018394]
[0.5818422 0.02043913 0.21002658]]
[[0.5446849 0.76911515 0.25069523]
[0.2858957 0.8523951 0.9750065 ]
[0.8848533 0.35950786 0.59885895]]]]
output of BatchNorm2D Layer:
[[[[ 0.41260773 -0.46198368 0.02029113]
[ 1.4071033 -1.3650038 -0.9794093 ]
[ 0.83283097 1.344658 -0.9294571 ]]
[[ 0.25201762 1.2038352 -0.8492796 ]
[-0.92113775 -1.1527538 -0.81768954]
[ 1.4666054 0.9641302 -0.9614319 ]]
[[ 0.9541145 -0.9075854 -0.366296 ]
[ 1.3792504 0.9590065 -0.69455147]
[-1.2463866 -0.56845784 -1.8291972 ]]]
[[[-0.5475932 1.2450331 -1.3302356 ]
[ 0.5955492 0.6119205 -1.0335984 ]
[-0.12019946 -1.2602081 1.5576957 ]]
[[ 1.4735192 -1.2985382 1.2014996 ]
[ 0.25746003 0.75583434 0.41783503]
[ 0.272331 -1.4174379 -0.84679806]]
[[ 0.02166999 0.7923442 -0.9878652 ]
[-0.8669898 1.0783204 1.4993575 ]
[ 1.189779 -0.614212 0.20769906]]]]
channel 0 of input data:
[[[0.54340494 0.2783694 0.4245176 ]
[0.84477615 0.00471886 0.12156912]
[0.67074907 0.82585275 0.13670659]]
[[0.25242636 0.7956625 0.01525497]
[0.5988434 0.6038045 0.10514768]
[0.38194343 0.03647606 0.89041156]]]
mean 0.4183686077594757, std 0.3030227720737457,
output:
[[[ 0.41263014 -0.46200886 0.02029219]
[ 1.4071798 -1.3650781 -0.9794626 ]
[ 0.8328762 1.3447311 -0.92950773]]
[[-0.54762304 1.2451009 -1.3303081 ]
[ 0.5955816 0.61195374 -1.0336547 ]
[-0.12020606 -1.2602768 1.5577804 ]]]
| Apache-2.0 | junior_class/chapter-3-Computer_Vision/notebook/3-1-CV-CNN_Basis.ipynb | CS-Learnings/PaddlePaddle-awesome-DeepLearning |
**- Using BatchNorm at prediction time**

The above described how BatchNorm normalizes a batch of samples during training. If we normalized a batch of prediction samples in the same way, the predictions would become nondeterministic: computing the mean and variance over samples A and B as one batch generally gives different results than over samples A, C and D as one batch, so sample A's prediction would vary with its batch, which is unreasonable for inference. The solution is to save the mean and variance accumulated over many samples during training and use the saved values directly at prediction time instead of recomputing them.

In the concrete implementation of BatchNorm, moving averages of the mean and variance are computed during training. In Paddle the default update is:

$$saved\_\mu_B \leftarrow \ saved\_\mu_B \times 0.9 + \mu_B \times (1 - 0.9)$$

$$saved\_\sigma_B^2 \leftarrow \ saved\_\sigma_B^2 \times 0.9 + \sigma_B^2 \times (1 - 0.9)$$

At the very start of training, $saved\_\mu_B$ and $saved\_\sigma_B^2$ are set to 0. Each time a new batch of samples is fed in, $\mu_B$ and $\sigma_B^2$ are computed and the saved values are updated by the formulas above; their values are continually updated throughout training and stored as parameters of the BatchNorm layer. At prediction time, $saved\_\mu_B$ and $saved\_\sigma_B^2$ are loaded and used in place of $\mu_B$ and $\sigma_B^2$.

Dropout

Dropout is a commonly used method for suppressing overfitting in deep learning. During training, a random subset of neurons is deleted: the selected neurons' outputs are set to 0 and they pass no signal onward. Figure 16 illustrates Dropout: the left side is the complete network, and the right side is the network structure after applying Dropout. The neurons marked with $\times$ are removed from the network and pass nothing to the following layers. Which neurons are dropped is decided randomly during learning, so the model cannot depend too heavily on particular neurons, which suppresses overfitting to some degree.

Figure 16: Dropout

At prediction time the signals of all neurons are passed forward, which may raise a new problem: during training part of the neurons were randomly dropped, so the overall magnitude of the output data shrinks. For example, its $L1$ norm is smaller than without Dropout, while at prediction time no neurons are dropped, so the data distributions at training and prediction time differ. To solve this, Paddle supports the following two methods:

- **downscale_in_infer**

During training, drop a fraction $r$ of the neurons at random and pass no signal through them; at prediction time, pass all neurons' signals forward but multiply each neuron's value by $(1 - r)$.

- **upscale_in_train**

During training, drop a fraction $r$ of the neurons at random and pass no signal through them, but divide the values of the kept neurons by $(1 - r)$; at prediction time, pass all neurons' signals forward without any processing.

In Paddle's [Dropout API](https://www.paddlepaddle.org.cn/documentation/docs/en/2.0-rc/api/paddle/nn/layer/common/Dropout_en.html#dropout), the mode parameter specifies which method is used to operate on the neurons:

> paddle.nn.Dropout(p=0.5, axis=None, mode="upscale_in_train", name=None)

Main parameters:

- p (float): the probability of setting an input element to 0, i.e. the drop probability; default 0.5. The probability applies to each element individually, not to all elements as a whole: if a matrix holds 12 numbers, dropout with probability 0.5 will not necessarily produce exactly 6 zeros.
- mode (str): the Dropout implementation, either 'downscale_in_infer' or 'upscale_in_train'; the default is 'upscale_in_train'.

------
**Note:**

Different frameworks may handle Dropout differently by default; consult the API documentation for details.

------

The program below shows what the output data looks like after Dropout. | # The dropout operation
import paddle
import numpy as np
# Set the random seed so the results are reproducible on every run
np.random.seed(100)
# Create data of shape [N, C, H, W], typically the output of a convolution layer
data1 = np.random.rand(2,3,3,3).astype('float32')
# Create data of shape [N, K], typically the output of a fully connected layer
data2 = np.arange(1,13).reshape([-1, 3]).astype('float32')
# Apply dropout to the input data
x1 = paddle.to_tensor(data1)
# In downscale_in_infer mode
drop11 = paddle.nn.Dropout(p = 0.5, mode = 'downscale_in_infer')
droped_train11 = drop11(x1)
# Switch to eval mode. In dynamic-graph mode, eval() switches to evaluation mode, which disables dropout.
drop11.eval()
droped_eval11 = drop11(x1)
# In upscale_in_train mode
drop12 = paddle.nn.Dropout(p = 0.5, mode = 'upscale_in_train')
droped_train12 = drop12(x1)
# Switch to eval mode
drop12.eval()
droped_eval12 = drop12(x1)
x2 = paddle.to_tensor(data2)
drop21 = paddle.nn.Dropout(p = 0.5, mode = 'downscale_in_infer')
droped_train21 = drop21(x2)
# Switch to eval mode
drop21.eval()
droped_eval21 = drop21(x2)
drop22 = paddle.nn.Dropout(p = 0.5, mode = 'upscale_in_train')
droped_train22 = drop22(x2)
# Switch to eval mode
drop22.eval()
droped_eval22 = drop22(x2)
print('x1 {}, \n droped_train11 \n {}, \n droped_eval11 \n {}'.format(data1, droped_train11.numpy(), droped_eval11.numpy()))
print('x1 {}, \n droped_train12 \n {}, \n droped_eval12 \n {}'.format(data1, droped_train12.numpy(), droped_eval12.numpy()))
print('x2 {}, \n droped_train21 \n {}, \n droped_eval21 \n {}'.format(data2, droped_train21.numpy(), droped_eval21.numpy()))
print('x2 {}, \n droped_train22 \n {}, \n droped_eval22 \n {}'.format(data2, droped_train22.numpy(), droped_eval22.numpy())) | _____no_output_____ | Apache-2.0 | junior_class/chapter-3-Computer_Vision/notebook/3-1-CV-CNN_Basis.ipynb | CS-Learnings/PaddlePaddle-awesome-DeepLearning |
From the output of the code above you can see that after dropout some elements of the tensor become 0. That is exactly what dropout does: by randomly setting elements of the input data to 0, it weakens the co-adaptation between neuron nodes and strengthens the model's ability to generalize.

Summary

Having learned these concepts, you now have the foundation needed to build convolutional neural networks. In the next section, we will apply these basic modules together to build the model for a typical image-classification application: an eye-disease screening task on medical images.

Homework

1 Count the multiplications and additions in a convolution

The input data shape is $[10, 3, 224, 224]$, the kernel is $k_h = k_w = 3$, the number of output channels is $64$, the stride is $stride=1$, and the padding is $p_h = p_w = 1$. How many multiplication and addition operations does completing this convolution require in total?

- Hint: first work out how many multiplications and additions are needed to produce one output pixel, then compute the total number of operations.
- Submission: reply with the numbers of multiplications and additions, for example: multiplications 1000, additions 1000.

2 Work out the output data shapes and parameter shapes of the network layers

The network structure is defined in the code below, and the input data shape is $[10, 3, 224, 224]$. Compute the output data shape of every layer, as well as the shapes of the parameters each layer contains. | # Define the SimpleNet network structure
import paddle
from paddle.nn import Conv2D, MaxPool2D, Linear
import paddle.nn.functional as F
class SimpleNet(paddle.nn.Layer):
def __init__(self, num_classes=1):
        super(SimpleNet, self).__init__()
        self.conv1 = Conv2D(in_channels=3, out_channels=6, kernel_size=5, stride=1, padding=2)
        self.max_pool1 = MaxPool2D(kernel_size=2, stride=2)
        self.conv2 = Conv2D(in_channels=6, out_channels=16, kernel_size=5, stride=1, padding=2)
        self.max_pool2 = MaxPool2D(kernel_size=2, stride=2)
self.fc1 = Linear(in_features=50176, out_features=64)
self.fc2 = Linear(in_features=64, out_features=num_classes)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.max_pool1(x)
x = self.conv2(x)
x = F.relu(x)
x = self.max_pool2(x)
x = paddle.reshape(x, [x.shape[0], -1])
x = self.fc1(x)
x = F.sigmoid(x)
x = self.fc2(x)
return x
| _____no_output_____ | Apache-2.0 | junior_class/chapter-3-Computer_Vision/notebook/3-1-CV-CNN_Basis.ipynb | CS-Learnings/PaddlePaddle-awesome-DeepLearning |
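For homework 2, one quick way to check your hand-computed shapes is paddle.summary, which prints each layer's output shape and parameter count for a given input shape. This is just a sanity-check sketch on top of the (fixed) SimpleNet definition above: | # Check layer output shapes and parameter counts with paddle.summary
model = SimpleNet(num_classes=1)
paddle.summary(model, (10, 3, 224, 224))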
Transfer Learning Template | %load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform | _____no_output_____ | MIT | experiments/tl_1v2/wisig-oracle.run1.limited/trials/16/trial.ipynb | stevester94/csc500-notebooks |
Allowed Parameters
These are the allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any is missing). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean. | required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
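Papermill reads the cell tagged "parameters" and writes the injected values into a new cell just below it. A typical command-line invocation might look like the sketch below (the notebook and output file names here are hypothetical); the injected cell then shows up in the executed notebook as the parameters dict further down.

papermill trial.ipynb trial_output.ipynb -p lr 0.0001 -p n_epoch 50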
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_1v2:wisig-oracle.run1.limited",
"device": "cuda",
"lr": 0.0001,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10",
"1-12",
"1-14",
"1-16",
"1-18",
"1-19",
"1-8",
"10-11",
"10-17",
"10-4",
"10-7",
"11-1",
"11-10",
"11-19",
"11-20",
"11-4",
"11-7",
"12-19",
"12-20",
"12-7",
"13-14",
"13-18",
"13-19",
"13-20",
"13-3",
"13-7",
"14-10",
"14-11",
"14-12",
"14-13",
"14-14",
"14-19",
"14-20",
"14-7",
"14-8",
"14-9",
"15-1",
"15-19",
"15-6",
"16-1",
"16-16",
"16-19",
"16-20",
"17-10",
"17-11",
"18-1",
"18-10",
"18-11",
"18-12",
"18-13",
"18-14",
"18-15",
"18-16",
"18-17",
"18-19",
"18-2",
"18-20",
"18-4",
"18-5",
"18-7",
"18-8",
"18-9",
"19-1",
"19-10",
"19-11",
"19-12",
"19-13",
"19-14",
"19-15",
"19-19",
"19-2",
"19-20",
"19-3",
"19-4",
"19-6",
"19-7",
"19-8",
"19-9",
"2-1",
"2-13",
"2-15",
"2-3",
"2-4",
"2-5",
"2-6",
"2-7",
"2-8",
"20-1",
"20-12",
"20-14",
"20-15",
"20-16",
"20-18",
"20-19",
"20-20",
"20-3",
"20-4",
"20-5",
"20-7",
"20-8",
"3-1",
"3-13",
"3-18",
"3-2",
"3-8",
"4-1",
"4-10",
"4-11",
"5-1",
"5-5",
"6-1",
"6-15",
"6-6",
"7-10",
"7-11",
"7-12",
"7-13",
"7-14",
"7-7",
"7-8",
"7-9",
"8-1",
"8-13",
"8-14",
"8-18",
"8-20",
"8-3",
"8-8",
"9-1",
"9-7",
],
"domains": [1, 2, 3, 4],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/root/csc500-main/datasets/wisig.node3-19.stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_mag"],
"episode_transforms": [],
"domain_prefix": "Wisig_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run1_10kExamples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag"],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1",
},
],
"dataset_seed": 154325,
"seed": 154325,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
    p.x_shape = [2,256] # Default to this if we don't supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
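# Optional sanity check (a sketch; shapes assume the default x_shape of [2, 256]):
# pass a dummy batch through x_net and confirm a 256-dim embedding comes out.
with torch.no_grad():
    dummy = torch.randn(4, 2, 256)   # a fake batch of 4 IQ examples
    print(x_net(dummy).shape)        # expect torch.Size([4, 256])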
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assesment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment) | _____no_output_____ | MIT | experiments/tl_1v2/wisig-oracle.run1.limited/trials/16/trial.ipynb | stevester94/csc500-notebooks |
**This notebook is an exercise in the [Data Cleaning](https://www.kaggle.com/learn/data-cleaning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/parsing-dates).**--- In this exercise, you'll apply what you learned in the **Parsing dates** tutorial. SetupThe questions below will give you feedback on your work. Run the following cell to set up the feedback system. | from learntools.core import binder
binder.bind(globals())
from learntools.data_cleaning.ex3 import *
print("Setup Complete") | Setup Complete
| MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
Get our environment set upThe first thing we'll need to do is load in the libraries and dataset we'll be using. We'll be working with a dataset containing information on earthquakes that occured between 1965 and 2016. | # modules we'll use
import pandas as pd
import numpy as np
import seaborn as sns
import datetime
# read in our data
earthquakes = pd.read_csv("../input/earthquake-database/database.csv")
# set seed for reproducibility
np.random.seed(0) | _____no_output_____ | MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
1) Check the data type of our date columnYou'll be working with the "Date" column from the `earthquakes` dataframe. Investigate this column now: does it look like it contains dates? What is the dtype of the column? | # TODO: Your code here!
earthquakes['Date'].dtype | _____no_output_____ | MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
Once you have answered the question above, run the code cell below to get credit for your work. | # Check your answer (Run this code cell to receive credit!)
q1.check()
# Line below will give you a hint
#q1.hint() | _____no_output_____ | MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
2) Convert our date columns to datetimeMost of the entries in the "Date" column follow the same format: "month/day/four-digit year". However, the entry at index 3378 follows a completely different pattern. Run the code cell below to see this. | earthquakes[3378:3383] | _____no_output_____ | MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
This does appear to be an issue with data entry: ideally, all entries in the column have the same format. We can get an idea of how widespread this issue is by checking the length of each entry in the "Date" column. | date_lengths = earthquakes.Date.str.len()
date_lengths.value_counts() | _____no_output_____ | MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
Looks like there are two more rows that have a date in a different format. Run the code cell below to obtain the indices corresponding to those rows and print the data. | indices = np.where([date_lengths == 24])[1]
print('Indices with corrupted data:', indices)
earthquakes.loc[indices] | Indices with corrupted data: [ 3378 7512 20650]
| MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
Given all of this information, it's your turn to create a new column "date_parsed" in the `earthquakes` dataset that has correctly parsed dates in it. **Note**: When completing this problem, you are allowed to (but are not required to) amend the entries in the "Date" and "Time" columns. Do not remove any rows from the dataset. | # TODO: Your code here
date_format = '%m/%d/%Y'
earthquakes.loc[indices,'Date'] = pd.to_datetime(earthquakes.loc[indices,'Date']) \
.dt.strftime(date_format)
earthquakes['date_parsed'] = pd.to_datetime(earthquakes['Date'])
# Check your answer
q2.check()
# Lines below will give you a hint or solution code
#q2.hint()
#q2.solution() | _____no_output_____ | MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
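As a quick sanity check (a sketch, assuming the cells above ran), confirm that the new column has a datetime dtype and that the previously corrupted rows parsed correctly: | # Verify the parsed dates
print(earthquakes['date_parsed'].dtype)          # expect datetime64[ns]
print(earthquakes.loc[indices, 'date_parsed'])   # the three repaired rows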