markdown | code | output | license | path | repo_name
---|---|---|---|---|---
Now, after even just a few training iterations, we can already see that the model is making progress on the task. | plt.figure()
plt.ylabel("Loss")
plt.xlabel("Training Steps")
plt.ylim([0,2])
plt.plot(batch_stats_callback.batch_losses)
plt.figure()
plt.ylabel("Accuracy")
plt.xlabel("Training Steps")
plt.ylim([0,1])
plt.plot(batch_stats_callback.batch_acc) | _____no_output_____ | Apache-2.0 | site/en/tutorials/images/transfer_learning_with_hub.ipynb | miried/tensorflow-docs |
Check the predictions. To redo the plot from before, first get the ordered list of class names: | class_names = sorted(image_data.class_indices.items(), key=lambda pair:pair[1])
class_names = np.array([key.title() for key, value in class_names])
class_names | _____no_output_____ | Apache-2.0 | site/en/tutorials/images/transfer_learning_with_hub.ipynb | miried/tensorflow-docs |
Run the image batch through the model and convert the indices to class names. | predicted_batch = model.predict(image_batch)
predicted_id = np.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id] | _____no_output_____ | Apache-2.0 | site/en/tutorials/images/transfer_learning_with_hub.ipynb | miried/tensorflow-docs |
Plot the result | label_id = np.argmax(label_batch, axis=-1)
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "green" if predicted_id[n] == label_id[n] else "red"
plt.title(predicted_label_batch[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (green: correct, red: incorrect)") | _____no_output_____ | Apache-2.0 | site/en/tutorials/images/transfer_learning_with_hub.ipynb | miried/tensorflow-docs |
Export your model. Now that you've trained the model, export it as a SavedModel for use later on. | t = time.time()
export_path = "/tmp/saved_models/{}".format(int(t))
model.save(export_path)
export_path | _____no_output_____ | Apache-2.0 | site/en/tutorials/images/transfer_learning_with_hub.ipynb | miried/tensorflow-docs |
Now confirm that we can reload it, and it still gives the same results: | reloaded = tf.keras.models.load_model(export_path)
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
abs(reloaded_result_batch - result_batch).max() | _____no_output_____ | Apache-2.0 | site/en/tutorials/images/transfer_learning_with_hub.ipynb | miried/tensorflow-docs |
Any function that's passed to a multiprocessing function must be defined globally, even the callback function (a minimal example of this pattern follows the config cell below). Decompressed size = 3.7 * compressed size. Config | # Reload all src modules every time before executing the typed Python code
%load_ext autoreload
%autoreload 2
import os
import sys
import json
import cProfile
import pandas as pd
import geopandas as geopd
import numpy as np
import multiprocessing as mp
try:
import cld3
except ModuleNotFoundError:
pass
import pycld2
from shapely.geometry import MultiPolygon
from shapely.geometry import Polygon
from shapely.geometry import Point
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import descartes
import datetime
import src.utils.geometry as geo
import src.utils.places_to_cells as places_to_cells
import src.utils.join_and_count as join_and_count
import src.utils.make_config as make_config
import src.data.shp_extract as shp_extract
import src.data.text_process as text_process
import src.data.access as data_access
import src.data.user_filters as ufilters
import src.data.user_agg as uagg
import src.data.metrics as metrics
import src.data.process as data_process
import src.data.cells_results as cells_results
import src.visualization.grid_viz as grid_viz
import src.visualization.helpers as helpers_viz
from dotenv import load_dotenv
load_dotenv()
pd.reset_option("display.max_rows")
data_dir_path = os.environ['DATA_DIR']
tweets_files_format = 'tweets_{}_{}_{}.json.gz'
places_files_format = 'places_{}_{}_{}.json.gz'
ssh_domain = os.environ['IFISC_DOMAIN']
ssh_username = os.environ['IFISC_USERNAME']
fig_dir = os.path.join('..', 'reports', 'figures')
project_data_dir = os.path.join('..', 'data')
external_data_dir = os.path.join(project_data_dir, 'external')
interim_data_dir = os.path.join(project_data_dir, 'interim')
processed_data_dir = os.path.join(project_data_dir, 'processed')
cell_data_path_format = os.path.join(
processed_data_dir, '{0}', '{0}_cc={1}_r={2}_cell_size={3}m.{4}')
latlon_proj = 'epsg:4326'
LANGS_DICT = dict([(lang[1],lang[0].lower().capitalize())
for lang in pycld2.LANGUAGES])
country_codes = ('BE', 'BO', 'CA', 'CH', 'EE', 'ES', 'FR', 'HK', 'ID', 'LT',
'LV', 'MY', 'PE', 'RO', 'SG', 'TN', 'UA')
with open(os.path.join(external_data_dir, 'countries.json')) as f:
countries_study_data = json.load(f)
with open(os.path.join(external_data_dir, 'langs_agg.json')) as f:
langs_agg_dict = json.load(f)
# Country-specific parameters
cc = 'BE'
region = None #'New York City'
# region = 'Quebec'
# region = 'Cataluña'
area_dict = make_config.area_dict(countries_study_data, cc, region=region)
country_name = area_dict['readable']
cc_fig_dir = os.path.join(fig_dir, cc)
if not os.path.exists(cc_fig_dir):
os.makedirs(os.path.join(cc_fig_dir, 'count'))
os.makedirs(os.path.join(cc_fig_dir, 'prop'))
xy_proj = area_dict['xy_proj']
cc_timezone = area_dict['timezone']
plot_langs_list = area_dict['local_langs']
min_poly_area = area_dict.get('min_poly_area')
max_place_area = area_dict.get('max_place_area') or 1e9
valid_uids_path = os.path.join(interim_data_dir, f'valid_uids_{cc}_{country_name}.csv') | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
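To illustrate the multiprocessing note above: the pattern used throughout this notebook defines both the worker function and the result-collecting callback at top level, so the pool can pickle and find them. A minimal, self-contained sketch of that pattern (names are illustrative, not from the project):

```python
import multiprocessing as mp

results = []

def work(x):
    # The function submitted to the pool must be defined at top level.
    return x * x

def collect(res):
    # The callback runs in the parent process and gathers results as they come.
    results.append(res)

if __name__ == '__main__':
    pool = mp.Pool(4)
    for x in range(8):
        pool.apply_async(work, (x,), callback=collect)
    pool.close()
    pool.join()
    print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49]
```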
Getting the data. Places, area and grid | shapefile_dict = make_config.shapefile_dict(area_dict, cc, region=region)
shapefile_path = os.path.join(
external_data_dir, shapefile_dict['name'], shapefile_dict['name'])
shape_df = geopd.read_file(shapefile_path)
shape_df = geo.extract_shape(
shape_df, shapefile_dict, xy_proj=xy_proj, min_area=min_poly_area)
shape_df | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
Places can be a point too -> in that case, treat them like tweets with coordinates (a sketch follows this cell's code). | places_files_paths = [
os.path.join(data_dir_path, places_files_format.format(2015, 2018, cc)),
os.path.join(data_dir_path, places_files_format.format(2019, 2019, cc))]
all_raw_places_df = []
for file in places_files_paths:
raw_places_df = data_access.return_json(file,
ssh_domain=ssh_domain, ssh_username=ssh_username, compression='gzip')
all_raw_places_df.append(
raw_places_df[['id', 'bounding_box', 'name', 'place_type']])
# We drop the duplicate places (based on their ID)
places_df = pd.concat(all_raw_places_df).drop_duplicates(subset='id')
places_geodf, places_in_xy = geo.make_places_geodf(places_df, shape_df,
xy_proj=xy_proj)
places_geodf.head()
from matplotlib.patches import Patch
# plt.rc('text', usetex=True)
# plt.rc('font', family='serif')
shape_df = geopd.read_file(shapefile_path)
shape_df = geo.extract_shape(
shape_df, shapefile_dict, xy_proj=xy_proj, min_area=min_poly_area)
mercator_proj = 'epsg:3857'
fig, ax = plt.subplots(1, figsize=(10, 6))
xlabel = 'position (km)'
ylabel = 'position (km)'
shape_df_mercator = shape_df.to_crs(mercator_proj)
area_df_bounds = shape_df_mercator.geometry.iloc[0].bounds
# We translate the whole geometries so that the origin (x,y) = (0,0) is
# located at the bottom left corner of the shape's bounding box.
x_off = -area_df_bounds[0]
y_off = -area_df_bounds[1]
shape_df_mercator.geometry = shape_df_mercator.translate(xoff=x_off, yoff=y_off)
shape_df_bounds = shape_df.geometry.iloc[0].bounds
# We translate the whole geometries so that the origin (x,y) = (0,0) is
# located at the bottom left corner of the shape's bounding box.
x_off = -(shape_df_bounds[2] + shape_df_bounds[0]) / 2 + (area_df_bounds[2] - area_df_bounds[0]) / 2 + 100e3
y_off = -(shape_df_bounds[3] + shape_df_bounds[1]) / 2 + (area_df_bounds[3] - area_df_bounds[1]) / 2 - 100e3
shape_df.geometry = shape_df.translate(xoff=x_off, yoff=y_off)
# The order here is important, the area's boundaries will be drawn on top
# of the choropleth, and the cells with null values will be in null_color
shape_df_mercator.plot(ax=ax, color='#587cf3', edgecolor='black')
shape_df.plot(ax=ax, color='#0833c1', edgecolor='black')
xticks_km = ax.get_xticks() / 1000
ax.set_xticklabels([f'{t:.0f}' for t in xticks_km])
yticks_km = ax.get_yticks() / 1000
ax.set_yticklabels([f'{t:.0f}' for t in yticks_km])
plt.xlabel(xlabel)
plt.ylabel(ylabel)
handles = [Patch(facecolor='#587cf3', label='EPSG:3857'),
Patch(facecolor='#0833c1', label='EPSG:3067')]
ax.legend(handles=handles, bbox_to_anchor=(1.05, 1), loc=2)
plt.savefig('mercator_finland.pdf', bbox_inches='tight')
plt.show()
plt.close()
cell_size = 10000
cells_df, cells_in_area_df, Nx, Ny = geo.create_grid(
shape_df, cell_size, xy_proj=xy_proj, intersect=True)
grid_test_df = cells_in_area_df.copy()
grid_test_df['metric'] = 1
# save_path = os.path.join(cc_fig_dir, f'grid_cc={cc}_cell_size={cell_size}m.pdf')
save_path = None
plot_kwargs = dict(alpha=0.7, edgecolor='w', linewidths=0.5, cmap='plasma')
ax = grid_viz.plot_grid(grid_test_df, shape_df, metric_col='metric', show=True,
save_path=save_path, xy_proj=xy_proj, **plot_kwargs)
cells_df, cells_in_area_df, Nx, Ny = geo.create_grid(
shape_df, cell_size, xy_proj=xy_proj, intersect=True, places_geodf=places_geodf)
import mplleaflet
cells_in_area_df.to_crs(latlon_proj).plot(edgecolor='w', figsize=(6,10))
mplleaflet.display()
tweets_files_paths = [
os.path.join(data_dir_path, tweets_files_format.format(2015, 2019, cc))]
# os.path.join(data_dir_path, tweets_files_format.format(2019, 2019, cc))]
tweets_access_res = None | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
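Regarding the note above that places can be points: a hedged sketch of how one might tell point-like places apart from polygon places in the raw payload (the bounding_box structure and the degenerate-box convention are assumptions; the actual handling lives in geo.make_places_geodf):

```python
import numpy as np
from shapely.geometry import Point, Polygon

def place_geometry(bounding_box):
    """Build a geometry from a place's bounding_box dict.

    A bounding box whose corners all coincide is point-like, so its tweets can
    be located exactly, like GPS-tagged tweets.
    """
    coords = np.asarray(bounding_box['coordinates'][0])
    if np.all(coords == coords[0]):
        return Point(coords[0])
    return Polygon(coords)

# Toy usage with a degenerate bounding box:
point_box = {'coordinates': [[[4.35, 50.85]] * 4]}
print(place_geometry(point_box))  # POINT (4.35 50.85)
```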
Reading the data | def profile_pre_process(tweets_file_path, chunk_start, chunk_size):
cProfile.runctx(
'''data_access.read_data(
tweets_file_path, chunk_start, chunk_size, dfs_to_join=[places_geodf])''',
globals(), locals())
tweets_access_res = []
def collect_tweets_access_res(res):
global tweets_access_res
if res.shape[0] > 0:
tweets_access_res.append(res)
pool = mp.Pool(8)
for file_path in tweets_files_paths:
for chunk_start, chunk_size in data_access.chunkify(
file_path, size=1e9, ssh_domain=ssh_domain,
ssh_username=ssh_username):
args = (file_path, chunk_start, chunk_size)
kwargs = {'cols': ['text', 'id', 'lang', 'place_id', 'coordinates',
'uid', 'created_at', 'source'],
'dfs_to_join': [places_geodf]}
pool.apply_async(
data_access.read_data, args, kwargs, callback=collect_tweets_access_res)
pool.close()
pool.join()
tweets_access_res = data_process.post_multi(tweets_access_res)
tweeted_months = None
tweets_pb_months = None
first_day = datetime.datetime(year=2015, month=1, day=1)
for res in tweets_access_res:
tweets_df = res.copy()
tweets_df = tweets_df.loc[tweets_df['created_at'] > first_day]
tweets_df['month'] = tweets_df['created_at'].dt.to_period('M')
has_gps = tweets_df['coordinates'].notnull()
geometry = tweets_df.loc[has_gps, 'coordinates'].apply(
lambda x: Point(x['coordinates']))
tweets_coords = geopd.GeoSeries(geometry, crs=latlon_proj,
index=tweets_df.loc[has_gps].index)
tweets_df = tweets_df.join(places_geodf, on='place_id', how='left')
coords_in_place = tweets_coords.within(
geopd.GeoSeries(tweets_df.loc[has_gps, 'geometry']))
tweeted_months = join_and_count.increment_counts(
tweeted_months, tweets_df, ['month'])
tweets_pb_months = join_and_count.increment_counts(tweets_pb_months,
tweets_df.loc[has_gps].loc[~coords_in_place], ['month'])
months_counts = tweeted_months.join(tweets_pb_months, rsuffix='_pb', how='left')
months_counts['prop'] = months_counts['count_pb'] / months_counts['count']
ax = months_counts['prop'].plot.bar()
ticks = np.arange(0,60,5)
tick_labels = ax.get_xticklabels()
_ = ax.set_xticks(ticks)
_ = ax.set_xticklabels([tick_labels[i] for i in ticks])
_ = ax.set_ylabel('proportion')
_ = ax.set_title('Proportion of tweets with coords outside of place') | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
Filtering out users. User-based filters imply a loop over all the raw_tweets_df, and must be applied before getting tweets_lang_df and even tweets_loc_df, because tweets from filtered-out users don't interest us at all. This filter requires us to loop over all files and aggregate the results to get the valid UIDs out. | if tweets_access_res is None:
def get_df_fun(arg0):
return data_access.read_json_wrapper(*arg0)
else:
def get_df_fun(arg0):
return arg0
def chunk_users_months(df_access, get_df_fun, places_geodf,
cols=None, ref_year=2015):
raw_tweets_df = get_df_fun(df_access)
raw_tweets_df = data_access.filter_df(
raw_tweets_df, cols=cols, dfs_to_join=[places_geodf])
months_counts = uagg.users_months(raw_tweets_df, ref_year=ref_year)
return months_counts
users_months_res = []
def collect_users_months_res(res):
global users_months_res
if res.shape[0] > 0:
users_months_res.append(res)
pool = mp.Pool(8)
for df_access in data_access.yield_tweets_access(
tweets_files_paths, tweets_res=tweets_access_res):
args = (df_access, get_df_fun, places_geodf)
kwargs = {'cols': ['id', 'uid', 'created_at']}
pool.apply_async(
chunk_users_months, args, kwargs,
callback=collect_users_months_res, error_callback=print)
pool.close()
pool.join()
tweeted_months_users = join_and_count.init_counts(['uid', 'month'])
for res in users_months_res:
tweeted_months_users = join_and_count.increment_join(tweeted_months_users,
res)
tweeted_months_users = tweeted_months_users['count']
total_nr_users = len(tweeted_months_users.index.levels[0])
print(f'In total, there are {total_nr_users} distinct users in the whole dataset.')
local_uids = ufilters.consec_months(tweeted_months_users)
bot_uids = ufilters.bot_activity(tweeted_months_users)
# We have local_uids: index of uids with a column full of True, and bot_uids:
# index of uids with a column full of False. When we multiply them, the uids
# in local_uids which are not in bot_uids are assigned NaN, and the ones which
# are in bot_uids are assigned False. When we convert to the boolean type,
# the NaNs turn to True.
valid_uids = (local_uids * bot_uids).astype('bool').rename('valid')
valid_uids = valid_uids.loc[valid_uids]
print(f'This leaves us with {len(valid_uids)} valid users in the whole dataset.') | There are 66972 users with at least 3 months of activity in the dataset.
There are 36284 users considered local in the dataset, as they have been active for 3 consecutive months in this area at least once.
0 users have been found to be bots because of their excessive activity, tweeting more than 3 times per minute.
This leaves us with 36284 valid users in the whole dataset.
| RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
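A toy illustration of the boolean-multiplication trick described in the comments above, assuming (as described there) that local_uids is an all-True series and bot_uids an all-False series:

```python
import pandas as pd

local_uids = pd.Series(True, index=[1, 2, 3])   # local users
bot_uids = pd.Series(False, index=[2])          # users flagged as bots
combined = (local_uids * bot_uids).astype('bool')
# uid 2 (a bot) becomes False; uids 1 and 3 get NaN, which turns True on the cast.
print(combined.loc[combined].index.tolist())  # [1, 3]
```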
Then we have to loop over all files once again to apply the speed filter. It is expensive, so it is applied last: by then some users have already been filtered out, so the tweets dataframes are smaller (a hedged sketch of such a filter follows this cell). | if tweets_access_res is None:
def get_df_fun(arg0):
return data_access.read_json_wrapper(*arg0)
else:
def get_df_fun(arg0):
return arg0
def speed_filter(df_access, get_df_fun, valid_uids, places_in_xy, max_distance,
cols=None):
tweets_df = get_df_fun(df_access)
tweets_df = data_access.filter_df(
tweets_df, cols=cols, dfs_to_join=[places_in_xy, valid_uids])
too_fast_uids = ufilters.too_fast(tweets_df, places_in_xy, max_distance)
return too_fast_uids
area_bounds = shape_df.to_crs(xy_proj).geometry.iloc[0].bounds
# Get an upper limit of the distance that can be travelled inside the area
max_distance = np.sqrt((area_bounds[0]-area_bounds[2])**2
+ (area_bounds[1]-area_bounds[3])**2)
cols = ['uid', 'created_at', 'place_id', 'coordinates']
too_fast_uids_list = []
def collect_too_fast_uids_list(res):
global too_fast_uids_list
if res.shape[0] > 0:
too_fast_uids_list.append(res)
pool = mp.Pool(8)
for df_access in data_access.yield_tweets_access(
tweets_files_paths, tweets_res=tweets_access_res):
args = (df_access, get_df_fun,
valid_uids, places_geodf, max_distance)
kwargs = {'cols': cols}
pool.apply_async(
speed_filter, args, kwargs, callback=collect_too_fast_uids_list,
error_callback=print)
pool.close()
pool.join()
too_fast_uids_series = pd.Series([])
too_fast_uids_series.index.name = 'uid'
for too_fast_uids in too_fast_uids_list:
too_fast_uids_series = (too_fast_uids_series * too_fast_uids).fillna(False)
print(f'In total, there are {len(too_fast_uids_series)} too fast users left to '
'filter out in the whole dataset.')
valid_uids = (valid_uids * too_fast_uids_series).astype('bool').rename('valid')
valid_uids = valid_uids.loc[valid_uids]
print(f'This leaves us with {len(valid_uids)} valid users in the whole dataset.')
valid_uids.to_csv(valid_uids_path, header=True) | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
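As a hedged sketch of what such a speed-based filter can look like for a single user (the 300 m/s threshold and the column names are assumptions; the actual rule lives in ufilters.too_fast and uses max_distance):

```python
import numpy as np
import pandas as pd

def too_fast(user_tweets, max_speed_ms=300.):
    """Flag a user whose consecutive tweets imply an implausible travel speed.

    user_tweets: one user's tweets, with a 'created_at' datetime column and
    projected 'x', 'y' coordinates in metres.
    """
    df = user_tweets.sort_values('created_at')
    dist = np.hypot(df['x'].diff(), df['y'].diff())   # metres
    dt = df['created_at'].diff().dt.total_seconds()   # seconds
    return bool((dist / dt > max_speed_ms).any())

toy = pd.DataFrame({
    'created_at': pd.to_datetime(['2019-01-01 10:00', '2019-01-01 10:01']),
    'x': [0., 500e3],   # 500 km in one minute: clearly too fast
    'y': [0., 0.]})
print(too_fast(toy))  # True
```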
Processing. We don't filter out tweets with a useless place (one that is too large) here, because these tweets can still be useful for language detection, so this filter is only applied later on. Similarly, we keep tweets with too little text for reliable language detection, because they can still be useful for residence attribution. | valid_uids = pd.read_csv(valid_uids_path, index_col='uid', header=0)
if tweets_access_res is None:
def get_df_fun(arg0):
return data_access.read_json_wrapper(*arg0)
else:
def get_df_fun(arg0):
return arg0
tweets_process_res = []
def collect_tweets_process_res(res):
global tweets_process_res
if res.shape[0] > 0:
tweets_process_res.append(res)
def access_and_process(df_access, get_df_fun, valid_uids, places_geodf,
langs_agg_dict, text_col='text', min_nr_words=4,
cld='pycld2', latlon_proj='epsg:4326'):
tweets_loc_df = get_df_fun(df_access)
cols = ['text', 'id', 'lang', 'place_id', 'coordinates', 'uid',
'created_at', 'source']
tweets_loc_df = data_access.filter_df(
tweets_loc_df, cols=cols, dfs_to_join=[places_geodf, valid_uids])
tweets_lang_df = data_process.process(
tweets_loc_df, places_geodf, langs_agg_dict,
min_nr_words=min_nr_words, cld=cld)
return tweets_lang_df
pool = mp.Pool(8)
for df_access in data_access.yield_tweets_access(
tweets_files_paths, tweets_res=tweets_access_res):
args = (df_access, get_df_fun,
valid_uids, places_geodf, langs_agg_dict)
kwargs = {'min_nr_words': 4, 'cld': 'pycld2'}
pool.apply_async(
access_and_process, args, kwargs, callback=collect_tweets_process_res,
error_callback=print)
pool.close()
pool.join()
tweets_process_res = data_process.post_multi(tweets_process_res) | /home/thomaslouf/Documents/code/multiling-twitter/.venv/lib/python3.6/site-packages/pandas/core/indexing.py:1418: FutureWarning:
Passing list-likes to .loc or [] with any missing label will raise
KeyError in the future, you can use .reindex() as an alternative.
See the documentation here:
https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike
| RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
Study at the tweet level. Make tweet counts data | tweet_level_label = 'tweets in {}'
plot_langs_dict = make_config.langs_dict(area_dict, tweet_level_label) | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
Why is sjoin so slow? It tests every cell, even though the match is exclusive: if one cell matches, no other will. Possible solution: loop over the cells, ordered by the counts obtained from places, and stop at the first match, which would greatly reduce the number of 'within' operations. Update: this doesn't seem possible, as deleting from a spatial index is extremely slow. | def get_langs_counts(tweets_lang_df, max_place_area, cells_in_area_df):
tweets_df = tweets_lang_df.copy()
relevant_area_mask = tweets_df['area'] < max_place_area
tweets_df = tweets_df.loc[relevant_area_mask]
# The following mask accounts for both tweets with GPS coordinates and
# tweets within places which are a point.
has_gps = tweets_df['area'] == 0
# Here the tweets with coordinates outside the grid are out, because of the
# inner join
tweets_cells_df = geopd.sjoin(tweets_df.loc[has_gps], cells_in_area_df,
op='within', rsuffix='cell', how='inner')
nr_out_tweets = len(tweets_df.loc[has_gps]) - len(tweets_cells_df)
print(f'{nr_out_tweets} tweets have been found outside of the grid and'
' filtered out as a result.')
tweets_places_df = tweets_df.loc[~has_gps]
return tweets_cells_df, tweets_places_df
with mp.Pool(8) as pool:
map_parameters = [(res, max_place_area, cells_in_area_df)
for res in tweets_process_res]
print('entering the loop')
tweets_pre_cell_res = (
pool.starmap_async(get_langs_counts, map_parameters).get())
cells_langs_counts = None
places_langs_counts = None
for res in tweets_pre_cell_res:
tweets_cells_df = res[0]
tweets_places_df = res[1]
groupby_cols = ['cld_lang', 'cell_id']
cells_langs_counts = join_and_count.increment_counts(
cells_langs_counts, tweets_cells_df, groupby_cols)
groupby_cols = ['cld_lang', 'place_id']
places_langs_counts = join_and_count.increment_counts(
places_langs_counts, tweets_places_df, groupby_cols)
places_langs_counts = places_langs_counts['count']
places_counts = (places_langs_counts.groupby('place_id')
.sum()
.rename('total_count')
.to_frame())
cells_langs_counts = cells_langs_counts['count']
cells_counts = (cells_langs_counts.groupby('cell_id')
.sum()
.rename('total_count')
.to_frame()) | entering the loop
109 tweets have been found outside of the grid and filtered out as a result.
123 tweets have been found outside of the grid and filtered out as a result.
119 tweets have been found outside of the grid and filtered out as a result.
149 tweets have been found outside of the grid and filtered out as a result.
120 tweets have been found outside of the grid and filtered out as a result.
129 tweets have been found outside of the grid and filtered out as a result.
180 tweets have been found outside of the grid and filtered out as a result.
127 tweets have been found outside of the grid and filtered out as a result.
148 tweets have been found outside of the grid and filtered out as a result.
103 tweets have been found outside of the grid and filtered out as a result.
100 tweets have been found outside of the grid and filtered out as a result.
111 tweets have been found outside of the grid and filtered out as a result.
96 tweets have been found outside of the grid and filtered out as a result.
101 tweets have been found outside of the grid and filtered out as a result.
95 tweets have been found outside of the grid and filtered out as a result.
83 tweets have been found outside of the grid and filtered out as a result.
80 tweets have been found outside of the grid and filtered out as a result.
59 tweets have been found outside of the grid and filtered out as a result.
110 tweets have been found outside of the grid and filtered out as a result.
168 tweets have been found outside of the grid and filtered out as a result.
161 tweets have been found outside of the grid and filtered out as a result.
97 tweets have been found outside of the grid and filtered out as a result.
116 tweets have been found outside of the grid and filtered out as a result.
160 tweets have been found outside of the grid and filtered out as a result.
130 tweets have been found outside of the grid and filtered out as a result.
197 tweets have been found outside of the grid and filtered out as a result.
220 tweets have been found outside of the grid and filtered out as a result.
123 tweets have been found outside of the grid and filtered out as a result.
134 tweets have been found outside of the grid and filtered out as a result.
143 tweets have been found outside of the grid and filtered out as a result.
128 tweets have been found outside of the grid and filtered out as a result.
110 tweets have been found outside of the grid and filtered out as a result.
147 tweets have been found outside of the grid and filtered out as a result.
181 tweets have been found outside of the grid and filtered out as a result.
166 tweets have been found outside of the grid and filtered out as a result.
195 tweets have been found outside of the grid and filtered out as a result.
171 tweets have been found outside of the grid and filtered out as a result.
156 tweets have been found outside of the grid and filtered out as a result.
124 tweets have been found outside of the grid and filtered out as a result.
148 tweets have been found outside of the grid and filtered out as a result.
163 tweets have been found outside of the grid and filtered out as a result.
119 tweets have been found outside of the grid and filtered out as a result.
133 tweets have been found outside of the grid and filtered out as a result.
97 tweets have been found outside of the grid and filtered out as a result.
192 tweets have been found outside of the grid and filtered out as a result.
128 tweets have been found outside of the grid and filtered out as a result.
104 tweets have been found outside of the grid and filtered out as a result.
131 tweets have been found outside of the grid and filtered out as a result.
115 tweets have been found outside of the grid and filtered out as a result.
138 tweets have been found outside of the grid and filtered out as a result.
135 tweets have been found outside of the grid and filtered out as a result.
144 tweets have been found outside of the grid and filtered out as a result.
130 tweets have been found outside of the grid and filtered out as a result.
115 tweets have been found outside of the grid and filtered out as a result.
130 tweets have been found outside of the grid and filtered out as a result.
101 tweets have been found outside of the grid and filtered out as a result.
108 tweets have been found outside of the grid and filtered out as a result.
149 tweets have been found outside of the grid and filtered out as a result.
168 tweets have been found outside of the grid and filtered out as a result.
125 tweets have been found outside of the grid and filtered out as a result.
132 tweets have been found outside of the grid and filtered out as a result.
140 tweets have been found outside of the grid and filtered out as a result.
104 tweets have been found outside of the grid and filtered out as a result.
144 tweets have been found outside of the grid and filtered out as a result.
134 tweets have been found outside of the grid and filtered out as a result.
130 tweets have been found outside of the grid and filtered out as a result.
165 tweets have been found outside of the grid and filtered out as a result.
122 tweets have been found outside of the grid and filtered out as a result.
129 tweets have been found outside of the grid and filtered out as a result.
157 tweets have been found outside of the grid and filtered out as a result.
114 tweets have been found outside of the grid and filtered out as a result.
107 tweets have been found outside of the grid and filtered out as a result.
125 tweets have been found outside of the grid and filtered out as a result.
169 tweets have been found outside of the grid and filtered out as a result.
225 tweets have been found outside of the grid and filtered out as a result.
383 tweets have been found outside of the grid and filtered out as a result.
387 tweets have been found outside of the grid and filtered out as a result.
163 tweets have been found outside of the grid and filtered out as a result.
271 tweets have been found outside of the grid and filtered out as a result.
216 tweets have been found outside of the grid and filtered out as a result.
239 tweets have been found outside of the grid and filtered out as a result.
251 tweets have been found outside of the grid and filtered out as a result.
244 tweets have been found outside of the grid and filtered out as a result.
170 tweets have been found outside of the grid and filtered out as a result.
244 tweets have been found outside of the grid and filtered out as a result.
178 tweets have been found outside of the grid and filtered out as a result.
169 tweets have been found outside of the grid and filtered out as a result.
223 tweets have been found outside of the grid and filtered out as a result.
215 tweets have been found outside of the grid and filtered out as a result.
225 tweets have been found outside of the grid and filtered out as a result.
210 tweets have been found outside of the grid and filtered out as a result.
220 tweets have been found outside of the grid and filtered out as a result.
195 tweets have been found outside of the grid and filtered out as a result.
125 tweets have been found outside of the grid and filtered out as a result.
157 tweets have been found outside of the grid and filtered out as a result.
168 tweets have been found outside of the grid and filtered out as a result.
107 tweets have been found outside of the grid and filtered out as a result.
173 tweets have been found outside of the grid and filtered out as a result.
137 tweets have been found outside of the grid and filtered out as a result.
119 tweets have been found outside of the grid and filtered out as a result.
108 tweets have been found outside of the grid and filtered out as a result.
136 tweets have been found outside of the grid and filtered out as a result.
160 tweets have been found outside of the grid and filtered out as a result.
103 tweets have been found outside of the grid and filtered out as a result.
114 tweets have been found outside of the grid and filtered out as a result.
149 tweets have been found outside of the grid and filtered out as a result.
95 tweets have been found outside of the grid and filtered out as a result.
101 tweets have been found outside of the grid and filtered out as a result.
108 tweets have been found outside of the grid and filtered out as a result.
92 tweets have been found outside of the grid and filtered out as a result.
84 tweets have been found outside of the grid and filtered out as a result.
107 tweets have been found outside of the grid and filtered out as a result.
110 tweets have been found outside of the grid and filtered out as a result.
145 tweets have been found outside of the grid and filtered out as a result.
96 tweets have been found outside of the grid and filtered out as a result.
144 tweets have been found outside of the grid and filtered out as a result.
111 tweets have been found outside of the grid and filtered out as a result.
218 tweets have been found outside of the grid and filtered out as a result.
187 tweets have been found outside of the grid and filtered out as a result.
157 tweets have been found outside of the grid and filtered out as a result.
169 tweets have been found outside of the grid and filtered out as a result.
140 tweets have been found outside of the grid and filtered out as a result.
102 tweets have been found outside of the grid and filtered out as a result.
130 tweets have been found outside of the grid and filtered out as a result.
130 tweets have been found outside of the grid and filtered out as a result.
106 tweets have been found outside of the grid and filtered out as a result.
101 tweets have been found outside of the grid and filtered out as a result.
105 tweets have been found outside of the grid and filtered out as a result.
89 tweets have been found outside of the grid and filtered out as a result.
97 tweets have been found outside of the grid and filtered out as a result.
107 tweets have been found outside of the grid and filtered out as a result.
117 tweets have been found outside of the grid and filtered out as a result.
112 tweets have been found outside of the grid and filtered out as a result.
106 tweets have been found outside of the grid and filtered out as a result.
108 tweets have been found outside of the grid and filtered out as a result.
114 tweets have been found outside of the grid and filtered out as a result.
106 tweets have been found outside of the grid and filtered out as a result.
88 tweets have been found outside of the grid and filtered out as a result.
107 tweets have been found outside of the grid and filtered out as a result.
109 tweets have been found outside of the grid and filtered out as a result.
128 tweets have been found outside of the grid and filtered out as a result.
133 tweets have been found outside of the grid and filtered out as a result.
199 tweets have been found outside of the grid and filtered out as a result.
221 tweets have been found outside of the grid and filtered out as a result.
161 tweets have been found outside of the grid and filtered out as a result.
165 tweets have been found outside of the grid and filtered out as a result.
193 tweets have been found outside of the grid and filtered out as a result.
196 tweets have been found outside of the grid and filtered out as a result.
185 tweets have been found outside of the grid and filtered out as a result.
311 tweets have been found outside of the grid and filtered out as a result.
149 tweets have been found outside of the grid and filtered out as a result.
158 tweets have been found outside of the grid and filtered out as a result.
181 tweets have been found outside of the grid and filtered out as a result.
185 tweets have been found outside of the grid and filtered out as a result.
300 tweets have been found outside of the grid and filtered out as a result.
236 tweets have been found outside of the grid and filtered out as a result.
250 tweets have been found outside of the grid and filtered out as a result.
314 tweets have been found outside of the grid and filtered out as a result.
237 tweets have been found outside of the grid and filtered out as a result.
447 tweets have been found outside of the grid and filtered out as a result.
301 tweets have been found outside of the grid and filtered out as a result.
266 tweets have been found outside of the grid and filtered out as a result.
240 tweets have been found outside of the grid and filtered out as a result.
262 tweets have been found outside of the grid and filtered out as a result.
182 tweets have been found outside of the grid and filtered out as a result.
90 tweets have been found outside of the grid and filtered out as a result.
131 tweets have been found outside of the grid and filtered out as a result.
116 tweets have been found outside of the grid and filtered out as a result.
82 tweets have been found outside of the grid and filtered out as a result.
151 tweets have been found outside of the grid and filtered out as a result.
121 tweets have been found outside of the grid and filtered out as a result.
126 tweets have been found outside of the grid and filtered out as a result.
139 tweets have been found outside of the grid and filtered out as a result.
100 tweets have been found outside of the grid and filtered out as a result.
105 tweets have been found outside of the grid and filtered out as a result.
129 tweets have been found outside of the grid and filtered out as a result.
160 tweets have been found outside of the grid and filtered out as a result.
121 tweets have been found outside of the grid and filtered out as a result.
141 tweets have been found outside of the grid and filtered out as a result.
120 tweets have been found outside of the grid and filtered out as a result.
21 tweets have been found outside of the grid and filtered out as a result.
19 tweets have been found outside of the grid and filtered out as a result.
95 tweets have been found outside of the grid and filtered out as a result.
218 tweets have been found outside of the grid and filtered out as a result.
77 tweets have been found outside of the grid and filtered out as a result.
217 tweets have been found outside of the grid and filtered out as a result.
168 tweets have been found outside of the grid and filtered out as a result.
214 tweets have been found outside of the grid and filtered out as a result.
267 tweets have been found outside of the grid and filtered out as a result.
211 tweets have been found outside of the grid and filtered out as a result.
193 tweets have been found outside of the grid and filtered out as a result.
245 tweets have been found outside of the grid and filtered out as a result.
196 tweets have been found outside of the grid and filtered out as a result.
169 tweets have been found outside of the grid and filtered out as a result.
156 tweets have been found outside of the grid and filtered out as a result.
129 tweets have been found outside of the grid and filtered out as a result.
81 tweets have been found outside of the grid and filtered out as a result.
54 tweets have been found outside of the grid and filtered out as a result.
87 tweets have been found outside of the grid and filtered out as a result.
68 tweets have been found outside of the grid and filtered out as a result.
74 tweets have been found outside of the grid and filtered out as a result.
76 tweets have been found outside of the grid and filtered out as a result.
51 tweets have been found outside of the grid and filtered out as a result.
55 tweets have been found outside of the grid and filtered out as a result.
35 tweets have been found outside of the grid and filtered out as a result.
66 tweets have been found outside of the grid and filtered out as a result.
66 tweets have been found outside of the grid and filtered out as a result.
75 tweets have been found outside of the grid and filtered out as a result.
64 tweets have been found outside of the grid and filtered out as a result.
53 tweets have been found outside of the grid and filtered out as a result.
66 tweets have been found outside of the grid and filtered out as a result.
81 tweets have been found outside of the grid and filtered out as a result.
54 tweets have been found outside of the grid and filtered out as a result.
74 tweets have been found outside of the grid and filtered out as a result.
60 tweets have been found outside of the grid and filtered out as a result.
67 tweets have been found outside of the grid and filtered out as a result.
91 tweets have been found outside of the grid and filtered out as a result.
53 tweets have been found outside of the grid and filtered out as a result.
78 tweets have been found outside of the grid and filtered out as a result.
235 tweets have been found outside of the grid and filtered out as a result.
69 tweets have been found outside of the grid and filtered out as a result.
77 tweets have been found outside of the grid and filtered out as a result.
56 tweets have been found outside of the grid and filtered out as a result.
76 tweets have been found outside of the grid and filtered out as a result.
62 tweets have been found outside of the grid and filtered out as a result.
90 tweets have been found outside of the grid and filtered out as a result.
108 tweets have been found outside of the grid and filtered out as a result.
98 tweets have been found outside of the grid and filtered out as a result.
75 tweets have been found outside of the grid and filtered out as a result.
66 tweets have been found outside of the grid and filtered out as a result.
65 tweets have been found outside of the grid and filtered out as a result.
57 tweets have been found outside of the grid and filtered out as a result.
41 tweets have been found outside of the grid and filtered out as a result.
42 tweets have been found outside of the grid and filtered out as a result.
36 tweets have been found outside of the grid and filtered out as a result.
37 tweets have been found outside of the grid and filtered out as a result.
44 tweets have been found outside of the grid and filtered out as a result.
69 tweets have been found outside of the grid and filtered out as a result.
49 tweets have been found outside of the grid and filtered out as a result.
| RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
Places -> cells | # We count the number of users speaking a local language in each cell and place
# of residence.
local_langs = [lang for lang in plot_langs_dict]
places_local_counts = places_langs_counts.reset_index(level='cld_lang')
local_langs_mask = places_local_counts['cld_lang'].isin(local_langs)
places_local_counts = (places_local_counts.loc[local_langs_mask]
.groupby('place_id')['count']
.sum()
.rename('local_count'))
places_counts = places_counts.join(places_local_counts, how='left')
cells_local_counts = cells_langs_counts.reset_index(level='cld_lang')
local_langs_mask = cells_local_counts['cld_lang'].isin(local_langs)
cells_local_counts = (cells_local_counts.loc[local_langs_mask]
.groupby('cell_id')['count']
.sum()
.rename('local_count'))
cells_counts = cells_counts.join(cells_local_counts, how='left')
cell_plot_df = places_to_cells.get_counts(
places_counts, places_langs_counts, places_geodf,
cells_in_area_df, plot_langs_dict)
# We add the counts from the tweets with coordinates
cell_plot_df = join_and_count.increment_join(
cell_plot_df, cells_counts['total_count'], count_col='total_count')
cell_plot_df = join_and_count.increment_join(
cell_plot_df, cells_counts['local_count'], count_col='local_count')
cell_plot_df = cell_plot_df.loc[cell_plot_df['total_count'] > 0]
for plot_lang, lang_dict in plot_langs_dict.items():
lang_count_col = lang_dict['count_col']
cells_lang_counts = cells_langs_counts.xs(plot_lang).rename(lang_count_col)
cell_plot_df = join_and_count.increment_join(
cell_plot_df, cells_lang_counts, count_col=lang_count_col)
level_lang_label = tweet_level_label.format(lang_dict['readable'])
sum_lang = cell_plot_df[lang_count_col].sum()
print(f'There are {sum_lang:.0f} {level_lang_label}.')
cell_plot_df['cell_id'] = cell_plot_df.index
cell_data_path = cell_data_path_format.format('tweets', cc, cell_size)
cell_plot_df.to_file(cell_data_path, driver='GeoJSON') | There are 9010159 tweets in Spanish.
There are 5035352 tweets in Catalan.
| RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
Plots | # cell_size = 20000
cell_data_path = cell_data_path_format.format('tweets', cc, cell_size)
cell_plot_df = geopd.read_file(cell_data_path)
cell_plot_df.index = cell_plot_df['cell_id']
cell_plot_df, plot_langs_dict = metrics.calc_by_cell(cell_plot_df, plot_langs_dict)
for plot_lang, plot_dict in plot_langs_dict.items():
count_lang_col = plot_dict['count_col']
readable_lang = plot_dict['readable']
save_path = os.path.join(cc_fig_dir, 'count',
f'tweet_counts_cc={cc}_lang={plot_lang}_cell_size={cell_size}m.pdf')
plot_title = f'Distribution of {readable_lang} speakers in {country_name}'
cbar_label = plot_dict['count_label']
plot_kwargs = dict(edgecolor='w', linewidths=0.2, cmap='Purples')
ax_count = grid_viz.plot_grid(
cell_plot_df, shape_df, metric_col=count_lang_col, save_path=save_path,
show=False, log_scale=True, title=plot_title, cbar_label=cbar_label,
xy_proj=xy_proj, **plot_kwargs)
prop_lang_col = plot_dict['prop_col']
save_path = os.path.join(cc_fig_dir, 'prop',
f'tweets_prop_cc={cc}_lang={plot_lang}_cell_size={cell_size}m.pdf')
plot_title = '{} predominance in {}'.format(readable_lang, country_name)
cbar_label = plot_dict['prop_label']
# Avoid sequential colormaps starting or ending with white, as white is
# reserved for an absence of data
plot_kwargs = dict(edgecolor='w', linewidths=0.2, cmap='plasma')
ax_prop = grid_viz.plot_grid(
cell_plot_df, shape_df, metric_col=prop_lang_col, save_path=save_path,
title=plot_title, cbar_label=cbar_label, vmin=0, vmax=1, xy_proj=xy_proj,
**plot_kwargs)
save_path = os.path.join(cc_fig_dir,
f'tweets_prop_cc={cc}_cell_size={cell_size}m.html')
prop_dict = {'name': 'prop', 'readable': 'proportion', 'vmin': 0, 'vmax': 1}
fig = grid_viz.plot_interactive(
cell_plot_df, shape_df, plot_langs_dict, prop_dict,
save_path=save_path, plotly_renderer='iframe_connected', show=True) | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
Study at the user level. Users who have tagged their tweets with GPS coordinates seem to do it regularly: on the first chunk, the median proportion of tweets they geotag is above 75%, so it's worth trying to get their cell of residence. | a = tweets_process_res[0].copy()
a['has_gps'] = a['area'] == 0
gps_uids = a.loc[a['has_gps'], 'uid'].unique()
a = a.loc[a['uid'].isin(gps_uids)].groupby(['uid', 'has_gps']).size().rename('count').to_frame()
a = a.join(a.groupby('uid')['count'].sum().rename('sum'))
b = a.reset_index()
b = b.loc[b['has_gps']]
b['ratio'] = b['count'] / b['sum']
b['ratio'].describe() | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
If there are one or more cells where a user tweeted more than relevant_th of the time, we take, among these cells, the one where they tweeted the most outside work hours. Otherwise, we take the relevant place where they tweeted the most outside work hours, or we default to the place where they tweeted the most (a toy sketch of this rule follows this cell). | user_level_label = '{}-speaking users'
lang_relevant_prop = 0.1
lang_relevant_count = 5
cell_relevant_th = 0.1
plot_langs_dict = make_config.langs_dict(area_dict, user_level_label) | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
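A minimal sketch of this attribution rule for a single user, on toy per-cell counts (the real implementation is uagg.get_residence; the column names and the fall-back to places are assumptions):

```python
import pandas as pd

def pick_home_cell(user_cell_habits, relevant_th=0.1):
    """Toy version of the cell-of-residence rule for one user.

    user_cell_habits: DataFrame with columns ['cell_id', 'isin_workhour', 'count'].
    """
    per_cell = user_cell_habits.groupby('cell_id')['count'].sum()
    relevant_cells = per_cell[per_cell / per_cell.sum() > relevant_th].index
    if len(relevant_cells) == 0:
        return None  # fall back to the place-based attribution
    outside_work = (user_cell_habits[~user_cell_habits['isin_workhour']]
                    .groupby('cell_id')['count'].sum())
    # Among the relevant cells, pick the one with most tweets outside work hours.
    return outside_work.reindex(relevant_cells).fillna(0).idxmax()

habits = pd.DataFrame({'cell_id':       [1, 1, 2, 3],
                       'isin_workhour': [True, False, False, True],
                       'count':         [30, 20, 5, 45]})
print(pick_home_cell(habits))  # 1: relevant and most active outside work hours
```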
If valid_uids is already generated, we only loop once over the tweets df and do the whole processing in one go on each file, thus keeping very little in memory | valid_uids = pd.read_csv(valid_uids_path, index_col='uid', header=0)
cells_df_list = [cells_in_area_df]
if tweets_access_res is None:
def get_df_fun(arg0):
return data_access.read_json_wrapper(*arg0)
else:
def get_df_fun(arg0):
return arg0
user_agg_res = []
def collect_user_agg_res(res):
global user_agg_res
user_agg_res.append(res)
pool = mp.Pool(8)
for df_access in data_access.yield_tweets_access(tweets_files_paths):
args = (df_access, get_df_fun, valid_uids, places_geodf, langs_agg_dict,
cells_df_list, max_place_area, cc_timezone)
kwargs = {'min_nr_words': 4, 'cld': 'pycld2'}
pool.apply_async(
uagg.get_lang_loc_habits, args, kwargs, callback=collect_user_agg_res,
error_callback=print)
pool.close()
pool.join()
user_langs_counts = join_and_count.init_counts(['uid', 'cld_lang'])
user_cells_habits = join_and_count.init_counts(['uid', 'cell_id',
'isin_workhour'])
user_places_habits = join_and_count.init_counts(['uid', 'place_id',
'isin_workhour'])
for lang_res, cell_res, place_res in user_agg_res:
user_langs_counts = join_and_count.increment_join(user_langs_counts,
lang_res)
user_cells_habits = join_and_count.increment_join(user_cells_habits,
cell_res[0])
user_places_habits = join_and_count.increment_join(user_places_habits,
place_res) | 1000MB read, 846686 tweets unpacked.
487136 tweets remaining after filters.
1000MB read, 852716 tweets unpacked.
477614 tweets remaining after filters.
1000MB read, 844912 tweets unpacked.
starting lang detect
471237 tweets remaining after filters.
starting lang detect
1000MB read, 841957 tweets unpacked.
starting lang detect
493881 tweets remaining after filters.
starting lang detect
chunk lang detect done
1000MB read, 839053 tweets unpacked.
chunk lang detect done
chunk lang detect done
475712 tweets remaining after filters.
1000MB read, 846221 tweets unpacked.
479240 tweets remaining after filters.
1000MB read, 831641 tweets unpacked.
chunk lang detect done
447460 tweets remaining after filters.
starting lang detect
starting lang detect
starting lang detect
1000MB read, 819327 tweets unpacked.
464618 tweets remaining after filters.
115.6MB read, 94907 tweets unpacked.
48148 tweets remaining after filters.
starting lang detect
chunk lang detect done
chunk lang detect done
1000MB read, 810121 tweets unpacked.
chunk lang detect done
chunk lang detect done
starting lang detect
457673 tweets remaining after filters.
1000MB read, 813545 tweets unpacked.
starting lang detect
440606 tweets remaining after filters.
starting lang detect
chunk lang detect done
chunk lang detect done
798.3MB read, 649307 tweets unpacked.
372235 tweets remaining after filters.
chunk lang detect done
starting lang detect
chunk lang detect done
| RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
Language(s) attribution. Very few users are actually filtered out by language attribution; note: it may be more worth it to generate user_langs_counts, user_cells_habits and user_places_habits inside the tweets_lang_df loop, so as to drop tweets_lang_df and only return these lightweight, user-level DFs. Here we get rid of users whose language we couldn't identify (a hedged sketch of such an attribution rule follows this cell). | # Residence attribution is the longest to run, and by a long shot, so we'll start
# with language to filter out uids in tweets_df before doing it
groupby_cols = ['uid', 'cld_lang']
user_langs_counts = None
for res in tweets_process_res:
tweets_lang_df = res.copy()
# Here we don't filter out based on max_place_area, because these tweets
# are still useful for language attribution.
tweets_lang_df = tweets_lang_df.loc[tweets_lang_df['cld_lang'].notnull()]
user_langs_counts = join_and_count.increment_counts(
user_langs_counts, tweets_lang_df, groupby_cols)
user_langs_agg = uagg.get_lang_grp(user_langs_counts, area_dict,
lang_relevant_prop=lang_relevant_prop,
lang_relevant_count=lang_relevant_count,
fig_dir=fig_dir, show_fig=True) | We were able to attribute at least one language to 33779 users
| RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
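A hedged sketch of a threshold-based attribution consistent with the parameters above: keep a (user, language) pair when the user has at least lang_relevant_count tweets in that language, or when it makes up at least lang_relevant_prop of their tweets. The exact rule lives in uagg.get_lang_grp; the OR combination and the data layout are assumptions:

```python
import pandas as pd

def attribute_langs(user_langs_counts, min_count=5, min_prop=0.1):
    """user_langs_counts: a 'count' column indexed by (uid, cld_lang)."""
    df = user_langs_counts['count'].reset_index()
    df['prop'] = df['count'] / df.groupby('uid')['count'].transform('sum')
    keep = (df['count'] >= min_count) | (df['prop'] >= min_prop)
    return df.loc[keep, ['uid', 'cld_lang']]

toy = pd.DataFrame({'uid': [1, 1, 2], 'cld_lang': ['fr', 'nl', 'en'],
                    'count': [40, 3, 2]}).set_index(['uid', 'cld_lang'])
print(attribute_langs(toy))
# uid 1 keeps 'fr' (40 tweets) but not 'nl' (3 tweets, 7% of their activity);
# uid 2 keeps 'en': only 2 tweets, but 100% of their activity.
```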
Attribute users to a group: mono-, bi-, tri-, ... -lingual. Problem: we need more tweets to detect multilingualism; e.g. users with only three tweets in the dataset are very unlikely to be detected as multilinguals (a toy grouping sketch follows this cell). | users_ling_grp = uagg.get_ling_grp(
user_langs_agg, area_dict, lang_relevant_prop=lang_relevant_prop,
lang_relevant_count=lang_relevant_count, fig_dir=fig_dir, show_fig=True) | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
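And a toy sketch of the subsequent grouping by number of attributed languages (a simplified stand-in for uagg.get_ling_grp):

```python
import pandas as pd

user_langs = pd.DataFrame({'uid': [1, 1, 2, 3, 3, 3],
                           'cld_lang': ['fr', 'nl', 'fr', 'fr', 'nl', 'en']})
nr_langs = user_langs.groupby('uid')['cld_lang'].nunique()
ling_grp = nr_langs.map({1: 'mono', 2: 'bi', 3: 'tri'}).fillna('multi')
print(ling_grp.to_dict())  # {1: 'bi', 2: 'mono', 3: 'tri'}
```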
Pre-residence attribution | with mp.Pool(8) as pool:
map_parameters = [(res, cells_in_area_df,
max_place_area, cc_timezone)
for res in tweets_process_res]
print('entering the loop')
tweets_pre_resid_res = (
pool.starmap_async(data_process.prep_resid_attr, map_parameters).get())
user_places_habits = None
user_cells_habits = None
for res in tweets_pre_resid_res:
# We first count the number of times a user has tweeted in each place inside
# and outside work hours.
tweets_places_df = res[1]
groupby_cols = ['uid', 'place_id', 'isin_workhour']
user_places_habits = join_and_count.increment_counts(
user_places_habits, tweets_places_df, groupby_cols)
# Then we do the same thing except in each cell, using the tweets with
# coordinates.
tweets_cells_df = res[0]
groupby_cols = ['uid', 'cell_id', 'isin_workhour']
user_cells_habits = join_and_count.increment_counts(
user_cells_habits, tweets_cells_df, groupby_cols) | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
Here we take the number of speakers: whether they're multilingual or monolingual, if they speak a language they count as one in that language's count (a toy illustration follows this cell). Residence attribution | user_home_cell, user_only_place = uagg.get_residence(
user_cells_habits, user_places_habits, place_relevant_th=cell_relevant_th,
cell_relevant_th=cell_relevant_th) | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
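A toy illustration of the counting convention described above (the layout of user_langs_agg is an assumption): a bilingual user contributes one count to each of their languages.

```python
import pandas as pd

# Hypothetical attributed (uid, language) pairs: user 1 speaks fr and nl.
user_langs_agg = pd.DataFrame({'uid': [1, 1, 2], 'cld_lang': ['fr', 'nl', 'fr']})
print(user_langs_agg.groupby('cld_lang')['uid'].nunique())  # fr: 2, nl: 1
```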
Generate cell data | cell_plot_df = data_process.from_users_area_and_lang(
cells_in_area_df, places_geodf, user_only_place,
user_home_cell, user_langs_agg, users_ling_grp,
plot_langs_dict, multiling_grps, cell_data_path_format) | There are 9012 German-speaking users.
There are 7927 French-speaking users.
There are 1887 Italian-speaking users.
| RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
GeoJSON should always be in lat/lon (WGS84) to be read by external programs, so for plotly, for instance, we need to make sure we come back to latlon_proj (a sketch follows this cell). Plots | cell_size = 10000
cell_data_path = cell_data_path_format.format(
'users_cell_data', cc, cell_size, 'geojson')
cell_plot_df = geopd.read_file(cell_data_path)
cell_plot_df.index = cell_plot_df['cell_id']
cell_plot_df, plot_langs_dict = metrics.calc_by_cell(cell_plot_df,
plot_langs_dict)
prop_dict = {'name': 'prop', 'readable': 'Proportion', 'log_scale': False,
'vmin': 0, 'vmax': 1, 'total_count_col': 'local_count'}
metric = prop_dict['name']
save_path_format = os.path.join(
cc_fig_dir, metric,
f'users_{metric}_cc={cc}_grp={{grp}}_cell_size={cell_size}m.pdf')
ax = helpers_viz.metric_grid(
cell_plot_df, prop_dict, shape_df, plot_langs_dict, country_name,
cmap='plasma', save_path_format=save_path_format, xy_proj=xy_proj,
min_count=0, null_color='k')
save_path = os.path.join(cc_fig_dir,
f'users_prop_cc={cc}_cell_size={cell_size}m.html')
prop_dict = {'name': 'prop', 'readable': 'Proportion', 'log_scale': False,
'vmin': 0, 'vmax': 1, 'total_count_col': 'local_count'}
fig = grid_viz.plot_interactive(
cell_plot_df, shape_df, plot_langs_dict, prop_dict,
save_path=save_path, plotly_renderer='iframe_connected', show=True) | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
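A minimal sketch of the reprojection this implies before writing (assuming cell_plot_df is currently in the projected CRS xy_proj; the output file name is illustrative):

```python
# Come back to WGS84 (lat/lon) before writing GeoJSON, so that external tools
# such as plotly read the coordinates correctly.
cell_plot_df.to_crs(latlon_proj).to_file('cells_latlon.geojson', driver='GeoJSON')
```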
Generate cell data files in loops. In all the above, the cell size and cc are assumed constant, defined in the config. Here we first assume that the cell size is not constant, then that the cc is not. | import sys
import logging
import logging.config
import traceback
import IPython
# logger = logging.getLogger(__name__)
# load config from file
logging.config.fileConfig('logging.ini', disable_existing_loggers=False)
def showtraceback(self):
traceback_lines = traceback.format_exception(*sys.exc_info())
del traceback_lines[1]
message = ''.join(traceback_lines)
logging.error(message)
# sys.stderr.write(message)
IPython.core.interactiveshell.InteractiveShell.showtraceback = showtraceback
tweets_files_format = 'tweets_{}_{}_{}.json.gz'
places_files_format = 'places_{}_{}_{}.json.gz'
source_data_dir = os.environ['DATA_DIR']
fig_dir = os.path.join('..', 'reports', 'figures')
project_data_dir = os.path.join('..', 'data')
external_data_dir = os.path.join(project_data_dir, 'external')
interim_data_dir = os.path.join(project_data_dir, 'interim')
processed_data_dir = os.path.join(project_data_dir, 'processed')
cell_data_path_format = os.path.join(
processed_data_dir, '{0}', '{0}_cc={1}_r={2}_cell_size={3}m.{4}')
null_reply_id = 'e39d05b72f25767869d44391919434896bb055772d7969f74472032b03bc18418911f3b0e6dd47ff8f3b2323728225286c3cb36914d28dc7db40bdd786159c0a'
with open(os.path.join(external_data_dir, 'countries.json')) as f:
countries_study_data = json.load(f)
with open(os.path.join(external_data_dir, 'langs_agg.json')) as f:
langs_agg_dict = json.load(f) | _____no_output_____ | RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
Countries loop | cc = 'PY'
regions = ()
# regions = ('New York City', 'Puerto Rico')
# regions = ('Catalonia', 'Balearic islands', 'Galicia', 'Valencian Community',
# 'Basque country')
# regions = ('Louisiana', 'Texas', 'New Mexico', 'Arizona', 'Nevada',
# 'California')
valid_uids_path_format = os.path.join(interim_data_dir, 'valid_uids_{}_{}.csv')
areas_dict = {'cc': cc, 'regions': {}}
if not regions:
areas_dict['regions'] = {cc: countries_study_data[cc]}
for r in regions:
areas_dict['regions'][r] = countries_study_data[cc]['regions'][r]
cell_sizes_list = [30000, 40000]
data_years = [(2015, 2019)]
tweets_files_paths = [
os.path.join(source_data_dir,
tweets_files_format.format(year_from, year_to, cc))
for year_from, year_to in data_years]
places_files_paths = [
os.path.join(source_data_dir,
places_files_format.format(year_from, year_to, cc))
for year_from, year_to in data_years]
lang_relevant_prop = 0.1
lang_relevant_count = 5
cell_relevant_th = 0.1
def get_df_fun(arg0):
return data_access.read_json_wrapper(*arg0)
areas_dict = geo.init_cc(
areas_dict, cell_sizes_list, places_files_paths, project_data_dir)
filters_pass_res = []
def collect_filters_pass_res(res):
global filters_pass_res
filters_pass_res.append(res)
areas_dict = ufilters.get_valid_uids(
areas_dict, get_df_fun, collect_filters_pass_res,
filters_pass_res, tweets_files_paths, cpus=8)
for region, region_dict in areas_dict['regions'].items():
valid_uids = region_dict['valid_uids']
valid_uids_path = valid_uids_path_format.format(areas_dict['cc'],
region_dict['readable'])
valid_uids.to_csv(valid_uids_path, header=True)
for region, region_dict in areas_dict['regions'].items():
valid_uids_path = valid_uids_path_format.format(areas_dict['cc'],
region_dict['readable'])
valid_uids = pd.read_csv(valid_uids_path, index_col='uid', header=0)
areas_dict['regions'][region]['valid_uids'] = valid_uids
user_agg_res = []
def collect_user_agg_res(res):
global user_agg_res
user_agg_res.append(res)
cells_results.from_scratch(
areas_dict,
tweets_files_paths, get_df_fun, collect_user_agg_res,
user_agg_res, langs_agg_dict, cell_data_path_format, null_reply_id,
    lang_relevant_prop=lang_relevant_prop, lang_relevant_count=lang_relevant_count,
    cell_relevant_th=cell_relevant_th,
place_relevant_th=0.1, fig_dir=fig_dir) | 2020-06-04 11:38:43,882 - src.data.cells_results - INFO - starting on chunk 0
2020-06-04 11:38:53,009 - src.data.cells_results - INFO - starting on chunk 1
2020-06-04 11:39:02,469 - src.data.cells_results - INFO - starting on chunk 2
2020-06-04 11:39:11,418 - src.data.cells_results - INFO - starting on chunk 3
2020-06-04 11:39:20,707 - src.data.cells_results - INFO - starting on chunk 4
2020-06-04 11:39:27,000 - src.data.access - INFO - 1000MB read, 1045895 tweets unpacked.
2020-06-04 11:39:30,043 - src.data.cells_results - INFO - starting on chunk 5
2020-06-04 11:39:36,508 - src.data.access - INFO - 685997 tweets remaining after filters.
2020-06-04 11:39:39,542 - src.data.cells_results - INFO - starting on chunk 6
2020-06-04 11:39:48,955 - src.data.cells_results - INFO - starting on chunk 7
2020-06-04 11:39:55,285 - src.data.access - INFO - 1000MB read, 1028962 tweets unpacked.
2020-06-04 11:39:58,725 - src.data.cells_results - INFO - starting on chunk 8
2020-06-04 11:40:04,441 - src.data.access - INFO - 689652 tweets remaining after filters.
2020-06-04 11:40:09,551 - src.data.cells_results - INFO - starting on chunk 9
2020-06-04 11:40:12,718 - src.data.access - INFO - 1000MB read, 1047956 tweets unpacked.
2020-06-04 11:40:19,169 - src.data.access - INFO - 1000MB read, 1037459 tweets unpacked.
2020-06-04 11:40:19,395 - src.data.cells_results - INFO - starting on chunk 10
2020-06-04 11:40:20,772 - src.data.access - INFO - 736129 tweets remaining after filters.
2020-06-04 11:40:23,005 - src.data.access - INFO - 399167 tweets remaining after filters.
starting lang detect
2020-06-04 11:40:29,337 - src.data.cells_results - INFO - starting on chunk 11
2020-06-04 11:40:39,673 - src.data.cells_results - INFO - starting on chunk 12
2020-06-04 11:40:48,276 - src.data.access - INFO - 1000MB read, 1034588 tweets unpacked.
starting lang detect
2020-06-04 11:40:49,590 - src.data.cells_results - INFO - starting on chunk 13
2020-06-04 11:40:55,487 - src.data.access - INFO - 806720 tweets remaining after filters.
2020-06-04 11:40:59,996 - src.data.cells_results - INFO - starting on chunk 14
2020-06-04 11:41:09,973 - src.data.cells_results - INFO - starting on chunk 15
chunk lang detect done
chunk lang detect done
2020-06-04 11:41:12,302 - src.data.cells_results - INFO - starting on chunk 16
2020-06-04 11:41:14,339 - src.data.access - INFO - 1000MB read, 1029088 tweets unpacked.
starting lang detect
2020-06-04 11:41:21,664 - src.data.access - INFO - 1000MB read, 1027359 tweets unpacked.
starting lang detect
2020-06-04 11:41:28,232 - src.data.access - INFO - 786160 tweets remaining after filters.
2020-06-04 11:41:32,904 - src.data.access - INFO - 787053 tweets remaining after filters.
2020-06-04 11:41:37,174 - src.data.access - INFO - 1000MB read, 1023918 tweets unpacked.
starting lang detect
2020-06-04 11:41:42,815 - src.data.access - INFO - 506928 tweets remaining after filters.
chunk lang detect done
starting lang detect
chunk lang detect done
chunk lang detect done
starting lang detect
starting lang detect
chunk lang detect done
chunk lang detect done
chunk lang detect done
2020-06-04 11:44:08,137 - src.data.access - INFO - 1000MB read, 1029333 tweets unpacked.
2020-06-04 11:44:16,766 - src.data.access - INFO - 817965 tweets remaining after filters.
starting lang detect
2020-06-04 11:45:03,610 - src.data.access - INFO - 1000MB read, 1031636 tweets unpacked.
2020-06-04 11:45:14,222 - src.data.access - INFO - 789319 tweets remaining after filters.
2020-06-04 11:45:42,423 - src.data.access - INFO - 1000MB read, 1024005 tweets unpacked.
chunk lang detect done
2020-06-04 11:45:50,486 - src.data.access - INFO - 782099 tweets remaining after filters.
starting lang detect
2020-06-04 11:46:11,575 - src.data.access - INFO - 1000MB read, 1007675 tweets unpacked.
2020-06-04 11:46:18,973 - src.data.access - INFO - 806602 tweets remaining after filters.
starting lang detect
2020-06-04 11:46:32,714 - src.data.access - INFO - 1000MB read, 999877 tweets unpacked.
2020-06-04 11:46:37,413 - src.data.access - INFO - 787906 tweets remaining after filters.
starting lang detect
chunk lang detect done
starting lang detect
chunk lang detect done
chunk lang detect done
2020-06-04 11:47:30,941 - src.data.access - INFO - 1000MB read, 1001811 tweets unpacked.
2020-06-04 11:47:37,340 - src.data.access - INFO - 792710 tweets remaining after filters.
2020-06-04 11:47:46,093 - src.data.access - INFO - 1000MB read, 1001733 tweets unpacked.
2020-06-04 11:47:52,697 - src.data.access - INFO - 791140 tweets remaining after filters.
chunk lang detect done
starting lang detect
2020-06-04 11:48:11,068 - src.data.access - INFO - 1000MB read, 1001287 tweets unpacked.
starting lang detect
2020-06-04 11:48:16,768 - src.data.access - INFO - 816335 tweets remaining after filters.
starting lang detect
chunk lang detect done
chunk lang detect done
2020-06-04 11:49:26,586 - src.data.access - INFO - 220.2MB read, 221086 tweets unpacked.
2020-06-04 11:49:27,570 - src.data.access - INFO - 166977 tweets remaining after filters.
starting lang detect
chunk lang detect done
chunk lang detect done
We were able to attribute at least one language to 48204 users
2020-06-04 11:51:14,318 - src.data.cells_results - INFO - lang attribution done
2020-06-04 11:51:26,374 - src.data.cells_results - INFO - There are 45737 Spanish-speaking users.
2020-06-04 11:51:26,376 - src.data.cells_results - INFO - There are 3986 Guarani-speaking users.
2020-06-04 11:51:26,377 - src.data.cells_results - INFO - There are 4571 Portuguese-speaking users.
2020-06-04 11:51:26,378 - src.data.cells_results - INFO - saving at ../data/processed/users_cell_data/users_cell_data_cc=PY_r=Paraguay_cell_size=30000m.geojson.
2020-06-04 11:51:26,391 - fiona._env - ERROR - ../data/processed/users_cell_data/users_cell_data_cc=PY_r=Paraguay_cell_size=30000m.geojson: No such file or directory
2020-06-04 11:51:26,393 - fiona._env - WARNING - driver GeoJSON does not support creation option ENCODING
2020-06-04 11:51:37,003 - src.data.cells_results - INFO - There are 45743 Spanish-speaking users.
2020-06-04 11:51:37,006 - src.data.cells_results - INFO - There are 3986 Guarani-speaking users.
2020-06-04 11:51:37,007 - src.data.cells_results - INFO - There are 4571 Portuguese-speaking users.
2020-06-04 11:51:37,007 - src.data.cells_results - INFO - saving at ../data/processed/users_cell_data/users_cell_data_cc=PY_r=Paraguay_cell_size=40000m.geojson.
2020-06-04 11:51:37,019 - fiona._env - ERROR - ../data/processed/users_cell_data/users_cell_data_cc=PY_r=Paraguay_cell_size=40000m.geojson: No such file or directory
2020-06-04 11:51:37,020 - fiona._env - WARNING - driver GeoJSON does not support creation option ENCODING
| RSA-MD | notebooks/1.1.first_whole_analysis.ipynb | TLouf/multiling-twitter |
EvaluationTo be able to make a statement about the performance of a question-answering system, it is important to evaluate it. Furthermore, evaluation allows us to determine which parts of the system can be improved. Start an Elasticsearch serverYou can start Elasticsearch on your local machine using Docker. If Docker is not readily available in your environment (e.g., in Colab notebooks), then you can manually download and execute Elasticsearch from source. | # Recommended: Start Elasticsearch using Docker
#! docker run -d -p 9200:9200 -e "discovery.type=single-node" elasticsearch:7.6.2
# In Colab / No Docker environments: Start Elasticsearch from source
! wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-linux-x86_64.tar.gz -q
! tar -xzf elasticsearch-7.6.2-linux-x86_64.tar.gz
! chown -R daemon:daemon elasticsearch-7.6.2
import os
from subprocess import Popen, PIPE, STDOUT
es_server = Popen(['elasticsearch-7.6.2/bin/elasticsearch'],
stdout=PIPE, stderr=STDOUT,
preexec_fn=lambda: os.setuid(1) # as daemon
)
# wait until ES has started
! sleep 30
# install haystack
! pip install git+git://github.com/deepset-ai/haystack.git@fix_tutorial_5
from farm.utils import initialize_device_settings
device, n_gpu = initialize_device_settings(use_cuda=True)
from haystack.indexing.io import fetch_archive_from_http
# Download evaluation data, which is a subset of Natural Questions development set containing 50 documents
doc_dir = "../data/nq"
s3_url = "https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/nq_dev_subset.json.zip"
fetch_archive_from_http(url=s3_url, output_dir=doc_dir)
# Connect to Elasticsearch
from haystack.database.elasticsearch import ElasticsearchDocumentStore
document_store = ElasticsearchDocumentStore(host="localhost", username="", password="", create_index=False)
# Add evaluation data to Elasticsearch database
document_store.add_eval_data("../data/nq/nq_dev_subset.json") | 06/05/2020 16:11:30 - INFO - elasticsearch - POST http://localhost:9200/_bulk [status:200 request:1.613s]
06/05/2020 16:11:31 - INFO - elasticsearch - POST http://localhost:9200/_bulk [status:200 request:0.453s]
| Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | arthurbarros/haystack |
Initialize components of QA-System | # Initialize Retriever
from haystack.retriever.elasticsearch import ElasticsearchRetriever
retriever = ElasticsearchRetriever(document_store=document_store)
# Initialize Reader
from haystack.reader.farm import FARMReader
reader = FARMReader("deepset/roberta-base-squad2")
# Initialize Finder which sticks together Reader and Retriever
from haystack.finder import Finder
finder = Finder(reader, retriever) | _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | arthurbarros/haystack |
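Before evaluating the individual components, the assembled Finder can be sanity-checked on a single query. This is only a hedged sketch: it assumes the `Finder.get_answers` interface of the Haystack version installed above, and the question is arbitrary (the indexed documents come from the Natural Questions dev subset).

```
# Quick sanity check of the pipeline (assumes Finder.get_answers exists in this Haystack version)
prediction = finder.get_answers(question="who wrote the declaration of independence",
                                top_k_retriever=10, top_k_reader=3)
for answer in prediction["answers"]:
    print(answer["answer"])  # each answer dict also carries context and confidence fields
```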
Evaluation of Retriever | # Evaluate Retriever on its own
retriever_eval_results = retriever.eval()
## Retriever Recall is the proportion of questions for which the correct document containing the answer
## is among the retrieved documents
print("Retriever Recall:", retriever_eval_results["recall"])
## Retriever Mean Avg Precision rewards retrievers that give relevant documents a higher rank
print("Retriever Mean Avg Precision:", retriever_eval_results["map"]) | 06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/feedback/_search?scroll=5m&size=1000 [status:200 request:0.170s]
06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.069s]
06/05/2020 16:12:46 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.022s]
06/05/2020 16:12:46 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.021s]
06/05/2020 16:12:46 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.019s]
06/05/2020 16:12:46 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.027s]
06/05/2020 16:12:46 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.026s]
06/05/2020 16:12:46 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.015s]
06/05/2020 16:12:46 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.024s]
06/05/2020 16:12:46 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.017s]
06/05/2020 16:12:46 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.014s]
06/05/2020 16:12:46 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.017s]
06/05/2020 16:12:46 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:46 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.013s]
06/05/2020 16:12:46 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.016s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.016s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.017s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.017s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.017s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.012s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.012s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.013s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.013s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.013s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.008s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.015s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.015s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.011s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.015s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.010s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.011s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.015s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.014s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.012s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.010s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.014s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.015s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.009s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.009s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.013s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.013s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.010s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.010s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.009s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.010s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.009s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.010s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.011s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.016s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.013s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.019s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.012s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.017s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.018s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.013s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search [status:200 request:0.015s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - Got 10 candidates from retriever
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/_search/scroll [status:200 request:0.017s]
06/05/2020 16:12:47 - INFO - elasticsearch - DELETE http://localhost:9200/_search/scroll [status:200 request:0.007s]
06/05/2020 16:12:47 - INFO - haystack.retriever.elasticsearch - For 54 out of 54 questions (100.00%), the answer was in the top-10 candidate passages selected by the retriever.
| Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | arthurbarros/haystack |
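To make these two retriever metrics concrete, here is a small, self-contained illustration (not Haystack's internal implementation): each toy ranking marks, per rank, whether the retrieved document contains the answer, and since each question has a single gold document, average precision reduces to the reciprocal rank of that document.

```
# Toy illustration of recall and mean average precision over ranked candidate lists
def recall_at_k(rankings, k=10):
    # fraction of questions whose answer-bearing document appears in the top-k candidates
    return sum(any(r[:k]) for r in rankings) / len(rankings)

def mean_avg_precision(rankings):
    # with one relevant document per question, average precision is 1/rank of that document
    def avg_precision(r):
        for rank, is_relevant in enumerate(r, start=1):
            if is_relevant:
                return 1.0 / rank
        return 0.0
    return sum(avg_precision(r) for r in rankings) / len(rankings)

toy_rankings = [[False, True, False], [True, False, False], [False, False, False]]
print(recall_at_k(toy_rankings, k=3))    # ~0.67: two of three questions answered in the top 3
print(mean_avg_precision(toy_rankings))  # 0.5: (1/2 + 1/1 + 0) / 3
```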
Evaluation of Reader | # Evaluate Reader on its own
reader_eval_results = reader.eval(document_store=document_store, device=device)
# Evaluation of Reader can also be done directly on a SQuAD-formatted file
# without passing the data to Elasticsearch
#reader_eval_results = reader.eval_on_file("../data/natural_questions", "dev_subset.json", device=device)
## Reader Top-N-Recall is the proportion of predicted answers that overlap with their corresponding correct answer
print("Reader Top-N-Recall:", reader_eval_results["top_n_recall"])
## Reader Exact Match is the proportion of questions where the predicted answer is exactly the same as the correct answer
print("Reader Exact Match:", reader_eval_results["EM"])
## Reader F1-Score is the average overlap between the predicted answers and the correct answers
print("Reader F1-Score:", reader_eval_results["f1"]) | 06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/feedback/_search?scroll=5m&size=1000 [status:200 request:0.022s]
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/_search/scroll [status:200 request:0.005s]
06/05/2020 16:12:47 - INFO - elasticsearch - DELETE http://localhost:9200/_search/scroll [status:200 request:0.003s]
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/eval_document/_search?scroll=5m&size=1000 [status:200 request:0.039s]
06/05/2020 16:12:47 - INFO - elasticsearch - POST http://localhost:9200/_search/scroll [status:200 request:0.010s]
06/05/2020 16:12:47 - INFO - elasticsearch - DELETE http://localhost:9200/_search/scroll [status:200 request:0.003s]
Evaluating: 100%|██████████| 78/78 [00:31<00:00, 2.50it/s]
| Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | arthurbarros/haystack |
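For intuition on the reader metrics, the sketch below computes exact match and token-overlap F1 for a single predicted/gold answer pair, in the spirit of the SQuAD-style definitions (illustrative only; the FARM evaluation handles text normalization and multiple gold answers).

```
from collections import Counter

def exact_match(prediction, ground_truth):
    return float(prediction.strip().lower() == ground_truth.strip().lower())

def answer_f1(prediction, ground_truth):
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    num_same = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))  # 0.0: strings differ
print(answer_f1("the Eiffel Tower", "Eiffel Tower"))    # 0.8: 2 shared tokens out of 3 and 2
```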
Evaluation of Finder | # Evaluate combination of Reader and Retriever through Finder
finder_eval_results = finder.eval()
print("\n___Retriever Metrics in Finder___")
print("Retriever Recall:", finder_eval_results["retriever_recall"])
print("Retriever Mean Avg Precision:", finder_eval_results["retriever_map"])
# Reader is only evaluated with those questions, where the correct document is among the retrieved ones
print("\n___Reader Metrics in Finder___")
print("Reader Top-1 accuracy:", finder_eval_results["reader_top1_accuracy"])
print("Reader Top-1 accuracy (has answer):", finder_eval_results["reader_top1_accuracy_has_answer"])
print("Reader Top-k accuracy:", finder_eval_results["reader_top_k_accuracy"])
print("Reader Top-k accuracy (has answer):", finder_eval_results["reader_topk_accuracy_has_answer"])
print("Reader Top-1 EM:", finder_eval_results["reader_top1_em"])
print("Reader Top-1 EM (has answer):", finder_eval_results["reader_top1_em_has_answer"])
print("Reader Top-k EM:", finder_eval_results["reader_topk_em"])
print("Reader Top-k EM (has answer):", finder_eval_results["reader_topk_em_has_answer"])
print("Reader Top-1 F1:", finder_eval_results["reader_top1_f1"])
print("Reader Top-1 F1 (has answer):", finder_eval_results["reader_top1_f1_has_answer"])
print("Reader Top-k F1:", finder_eval_results["reader_topk_f1"])
print("Reader Top-k F1 (has answer):", finder_eval_results["reader_topk_f1_has_answer"])
print("Reader Top-1 no-answer accuracy:", finder_eval_results["reader_top1_no_answer_accuracy"])
print("Reader Top-k no-answer accuracy:", finder_eval_results["reader_topk_no_answer_accuracy"])
# Time measurements
print("\n___Time Measurements___")
print("Total retrieve time:", finder_eval_results["total_retrieve_time"])
print("Avg retrieve time per question:", finder_eval_results["avg_retrieve_time"])
print("Total reader timer:", finder_eval_results["total_reader_time"])
print("Avg read time per question:", finder_eval_results["avg_reader_time"])
print("Total Finder time:", finder_eval_results["total_finder_time"])
| _____no_output_____ | Apache-2.0 | tutorials/Tutorial5_Evaluation.ipynb | arthurbarros/haystack |
ML Pipeline PreparationFollow the instructions below to help you create your ML pipeline. 1. Import libraries and load data from database.- Import Python libraries- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)- Define feature and target variables X and Y | # import libraries
import pandas as pd
from sqlalchemy import create_engine
import re
import nltk
import string
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score, precision_score, recall_score
nltk.download(['punkt', 'wordnet', 'stopwords'])
nltk.download(['averaged_perceptron_tagger'])
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
print(engine)
df = pd.read_sql_table('labeled_messages', engine)
X = df['message']
Y = df.drop(['message', 'genre', 'id', 'original'], axis=1) | Engine(sqlite:///DisasterResponse.db)
| MIT | notepads/ML Pipeline Preparation.ipynb | ranjeetraj2005/Disaster_Response_System |
2. Write a tokenization function to process your text data | stop_words = nltk.corpus.stopwords.words("english")
lemmatizer = nltk.stem.wordnet.WordNetLemmatizer()
remove_punc_table = str.maketrans('', '', string.punctuation)
def tokenize(text):
# normalize case and remove punctuation
text = text.translate(remove_punc_table).lower()
# tokenize text
tokens = nltk.word_tokenize(text)
# lemmatize and remove stop words
#return [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]
return [lemmatizer.lemmatize(word).lower().strip() for word in tokens if word not in stop_words]
| _____no_output_____ | MIT | notepads/ML Pipeline Preparation.ipynb | ranjeetraj2005/Disaster_Response_System |
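A quick check of the tokenizer on a sample message (the exact tokens depend on the NLTK resources downloaded above) could look like this:

```
# punctuation is stripped, text lowercased, stop words removed and tokens lemmatized
sample = "We need tents, water and food in Jacmel!"
print(tokenize(sample))
# e.g. ['need', 'tent', 'water', 'food', 'jacmel']
```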
3. Build a machine learning pipelineThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables. | forest_clf = RandomForestClassifier(n_estimators=10)
pipeline = Pipeline([
('tfidf', TfidfVectorizer(tokenizer=tokenize)),
('forest', MultiOutputClassifier(forest_clf))
]) | _____no_output_____ | MIT | notepads/ML Pipeline Preparation.ipynb | ranjeetraj2005/Disaster_Response_System |
4. Train pipeline- Split data into train and test sets- Train pipeline | X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
#X_train
pipeline.fit(X_train, Y_train) | _____no_output_____ | MIT | notepads/ML Pipeline Preparation.ipynb | ranjeetraj2005/Disaster_Response_System |
5. Test your modelReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each. | Y_pred = pipeline.predict(X_test)
for i, col in enumerate(Y_test):
print(col)
print(classification_report(Y_test[col], Y_pred[:, i])) | related
precision recall f1-score support
0.0 0.31 0.14 0.20 2096
1.0 0.76 0.90 0.82 6497
accuracy 0.71 8593
macro avg 0.54 0.52 0.51 8593
weighted avg 0.65 0.71 0.67 8593
request
precision recall f1-score support
0.0 0.84 0.98 0.90 7117
1.0 0.41 0.08 0.14 1476
accuracy 0.82 8593
macro avg 0.62 0.53 0.52 8593
weighted avg 0.76 0.82 0.77 8593
offer
precision recall f1-score support
0.0 0.99 1.00 1.00 8547
1.0 0.00 0.00 0.00 46
accuracy 0.99 8593
macro avg 0.50 0.50 0.50 8593
weighted avg 0.99 0.99 0.99 8593
aid_related
precision recall f1-score support
0.0 0.61 0.81 0.69 5105
1.0 0.46 0.23 0.31 3488
accuracy 0.58 8593
macro avg 0.53 0.52 0.50 8593
weighted avg 0.55 0.58 0.54 8593
medical_help
precision recall f1-score support
0.0 0.92 0.99 0.96 7924
1.0 0.08 0.01 0.01 669
accuracy 0.92 8593
macro avg 0.50 0.50 0.48 8593
weighted avg 0.86 0.92 0.88 8593
medical_products
precision recall f1-score support
0.0 0.95 1.00 0.97 8161
1.0 0.03 0.00 0.00 432
accuracy 0.95 8593
macro avg 0.49 0.50 0.49 8593
weighted avg 0.90 0.95 0.92 8593
search_and_rescue
precision recall f1-score support
0.0 0.97 1.00 0.99 8348
1.0 0.00 0.00 0.00 245
accuracy 0.97 8593
macro avg 0.49 0.50 0.49 8593
weighted avg 0.94 0.97 0.96 8593
security
precision recall f1-score support
0.0 0.98 1.00 0.99 8426
1.0 0.00 0.00 0.00 167
accuracy 0.98 8593
macro avg 0.49 0.50 0.50 8593
weighted avg 0.96 0.98 0.97 8593
military
precision recall f1-score support
0.0 0.97 1.00 0.98 8328
1.0 0.00 0.00 0.00 265
accuracy 0.97 8593
macro avg 0.48 0.50 0.49 8593
weighted avg 0.94 0.97 0.95 8593
child_alone
precision recall f1-score support
0.0 1.00 1.00 1.00 8593
accuracy 1.00 8593
macro avg 1.00 1.00 1.00 8593
weighted avg 1.00 1.00 1.00 8593
water
precision recall f1-score support
0.0 0.94 1.00 0.96 8038
1.0 0.09 0.01 0.01 555
accuracy 0.93 8593
macro avg 0.51 0.50 0.49 8593
weighted avg 0.88 0.93 0.90 8593
food
| MIT | notepads/ML Pipeline Preparation.ipynb | ranjeetraj2005/Disaster_Response_System |
6. Improve your modelUse grid search to find better parameters. | '''
parameters = {
'tfidf__ngram_range': ((1, 1), (1, 2)),
'tfidf__max_df': (0.8, 1.0),
'tfidf__max_features': (None, 10000),
'forest__estimator__n_estimators': [50, 100],
'forest__estimator__min_samples_split': [2, 4]
}
'''
parameters = {
'tfidf__ngram_range': ((1, 1), (1, 2))
}
cv = GridSearchCV(pipeline, parameters, cv=3, n_jobs=-1, verbose= 10)
cv.fit(X_train, Y_train) | Fitting 3 folds for each of 2 candidates, totalling 6 fits
[CV] tfidf__ngram_range=(1, 1) .......................................
[CV] ........... tfidf__ngram_range=(1, 1), score=0.139, total= 48.2s
[CV] tfidf__ngram_range=(1, 1) .......................................
| MIT | notepads/ML Pipeline Preparation.ipynb | ranjeetraj2005/Disaster_Response_System |
7. Test your modelShow the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio! | def evaluate_model(model, X_test, Y_test):
Y_pred = model.predict(X_test)
    category_names = Y_test.columns  # label names for the per-category report
    print(classification_report(Y_test, Y_pred, target_names=category_names))
# print('Accuracy: ', accuracy_score(Y_test, Y_pred))
# print('Precision: ', precision_score(Y_test, Y_pred, average='weighted'))
# print('Recall: ', recall_score(Y_test, Y_pred, average='weighted'))
print('Accuracy: ', accuracy_score(Y_test, Y_pred))
print('Precision: ', precision_score(Y_test, Y_pred, average='weighted'))
print('Recall: ', recall_score(Y_test, Y_pred, average='weighted')) | Accuracy: 0.144652624229
Precision: 0.400912141504
Recall: 0.277450871544
| MIT | notepads/ML Pipeline Preparation.ipynb | ranjeetraj2005/Disaster_Response_System |
8. Try improving your model further. Here are a few ideas:* try other machine learning algorithms* add other features besides the TF-IDF | from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier,AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, accuracy_score, f1_score, fbeta_score, classification_report
from scipy.stats import hmean
from scipy.stats.mstats import gmean
def tokenize_text(text):
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize_text(sentence))
if len(pos_tags) == 0:
print('pos_tags:', pos_tags)
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, X, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
def new_model_pipeline():
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor())
])),
('clf', MultiOutputClassifier(AdaBoostClassifier()))
])
return pipeline
def multioutput_fscore(y_true,y_pred,beta=1):
score_list = []
if isinstance(y_pred, pd.DataFrame) == True:
y_pred = y_pred.values
if isinstance(y_true, pd.DataFrame) == True:
y_true = y_true.values
for column in range(0,y_true.shape[1]):
score = fbeta_score(y_true[:,column],y_pred[:,column],beta,average='weighted')
score_list.append(score)
f1score_numpy = np.asarray(score_list)
f1score_numpy = f1score_numpy[f1score_numpy<1]
f1score = gmean(f1score_numpy)
return f1score
model = new_model_pipeline()
parameters = {
'features__text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
# 'features__text_pipeline__vect__max_df': (0.75, 1.0),
# 'features__text_pipeline__vect__max_features': (None, 5000),
# 'features__text_pipeline__tfidf__use_idf': (True, False),
# 'clf__n_estimators': [10, 100],
# 'clf__learning_rate': [0.01, 0.1],
# 'features__transformer_weights': (
# {'text_pipeline': 1, 'starting_verb': 0.5},
# {'text_pipeline': 0.5, 'starting_verb': 1},
# {'text_pipeline': 0.8, 'starting_verb': 1},
# )
}
scorer = make_scorer(multioutput_fscore,greater_is_better = True)
cv = GridSearchCV(model, param_grid=parameters, scoring = scorer,verbose = 2, n_jobs = -1)
cv.fit(X_train, Y_train) | Fitting 5 folds for each of 2 candidates, totalling 10 fits
[CV] features__text_pipeline__vect__ngram_range=(1, 1) ...............
[CV] features__text_pipeline__vect__ngram_range=(1, 1), total= 1.9min
[CV] features__text_pipeline__vect__ngram_range=(1, 1) ...............
| MIT | notepads/ML Pipeline Preparation.ipynb | ranjeetraj2005/Disaster_Response_System |
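To see what the custom feature contributes, the transformer can be applied directly to a couple of example messages; it flags messages whose first sentence starts with a verb (or an 'RT' marker). The expected values below are indicative only.

```
# Quick check of the custom transformer on two example messages
sve = StartingVerbExtractor()
print(sve.transform(["Send water and tents to the east side", "The storm destroyed several houses"]))
# typically True for the imperative message and False for the declarative one
```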
9. Export your model as a pickle file | import joblib
joblib.dump(cv.best_estimator_, 'disaster_model.pkl') | _____no_output_____ | MIT | notepads/ML Pipeline Preparation.ipynb | ranjeetraj2005/Disaster_Response_System |
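The exported model can later be reloaded for inference, for example:

```
# Reload the exported model and classify a new message
model = joblib.load('disaster_model.pkl')
print(model.predict(["We are trapped and need medical assistance"]))
```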
TensorFlow implementation of VGG16 Import the required libraries | import inspect
import os
import numpy as np
import tensorflow as tf | D:\anaconda\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
| Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
Define the convolution layer | '''Convolution op wrapper, use RELU activation after convolution
Args:
layer_name: e.g. conv1, pool1...
x: input tensor, [batch_size, height, width, channels]
    out_channels: number of output channels (or convolutional kernels)
kernel_size: the size of convolutional kernel, VGG paper used: [3,3]
stride: A list of ints. 1-D of length 4. VGG paper used: [1, 1, 1, 1]
    is_pretrain: if loading pretrained parameters, freeze all conv layers.
    Depending on the situation, you can freeze only part of the conv layers;
    the parameters of frozen layers will not change during training.
Returns:
4D tensor
'''
def conv_layer(layer_name, x, out_channels, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=True):
in_channels = x.get_shape()[-1]
with tf.variable_scope(layer_name):
w = tf.get_variable(name='weights',
trainable=is_pretrain,
shape=[kernel_size[0], kernel_size[1], in_channels, out_channels],
initializer=tf.contrib.layers.xavier_initializer()) # default is uniform distribution initialization
b = tf.get_variable(name='biases',
trainable=is_pretrain,
shape=[out_channels],
initializer=tf.constant_initializer(0.0))
x = tf.nn.conv2d(x, w, stride, padding='SAME', name='conv')
x = tf.nn.bias_add(x, b, name='bias_add')
x = tf.nn.relu(x, name='relu')
return x
| _____no_output_____ | Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
Define the pooling layer | '''Pooling op
Args:
x: input tensor
kernel: pooling kernel, VGG paper used [1,2,2,1], the size of kernel is 2X2
stride: stride size, VGG paper used [1,2,2,1]
padding:
    is_max_pool: boolean
if True: use max pooling
else: use avg pooling
'''
def pool(layer_name, x, kernel=[1,2,2,1], stride=[1,2,2,1], is_max_pool=True):
if is_max_pool:
x = tf.nn.max_pool(x, kernel, strides=stride, padding='SAME', name=layer_name)
else:
x = tf.nn.avg_pool(x, kernel, strides=stride, padding='SAME', name=layer_name)
return x | _____no_output_____ | Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
Define the fully connected layer | '''Wrapper for fully connected layers with RELU activation as default
Args:
layer_name: e.g. 'FC1', 'FC2'
x: input feature map
out_nodes: number of neurons for current FC layer
'''
def fc_layer(layer_name, x, out_nodes,keep_prob=0.8):
shape = x.get_shape()
    # handle inputs that have not been flattened beforehand
if len(shape) == 4:
size = shape[1].value * shape[2].value * shape[3].value
else:
size = shape[-1].value
with tf.variable_scope(layer_name):
w = tf.get_variable('weights',
shape=[size, out_nodes],
initializer=tf.contrib.layers.xavier_initializer())
b = tf.get_variable('biases',
shape=[out_nodes],
initializer=tf.constant_initializer(0.0))
flat_x = tf.reshape(x, [-1, size]) # flatten into 1D
x = tf.nn.bias_add(tf.matmul(flat_x, w), b)
x = tf.nn.relu(x)
x = tf.nn.dropout(x, keep_prob)
return x
| _____no_output_____ | Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
Define the VGG16 network | def vgg16_net(x, n_classes, is_pretrain=True):
with tf.name_scope('VGG16'):
x = conv_layer('conv1_1', x, 64, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
x = conv_layer('conv1_2', x, 64, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
with tf.name_scope('pool1'):
x = pool('pool1', x, kernel=[1,2,2,1], stride=[1,2,2,1], is_max_pool=True)
x = conv_layer('conv2_1', x, 128, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
x = conv_layer('conv2_2', x, 128, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
with tf.name_scope('pool2'):
x = pool('pool2', x, kernel=[1,2,2,1], stride=[1,2,2,1], is_max_pool=True)
x = conv_layer('conv3_1', x, 256, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
x = conv_layer('conv3_2', x, 256, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
x = conv_layer('conv3_3', x, 256, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
with tf.name_scope('pool3'):
x = pool('pool3', x, kernel=[1,2,2,1], stride=[1,2,2,1], is_max_pool=True)
x = conv_layer('conv4_1', x, 512, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
x = conv_layer('conv4_2', x, 512, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
x = conv_layer('conv4_3', x, 512, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
with tf.name_scope('pool4'):
x = pool('pool4', x, kernel=[1,2,2,1], stride=[1,2,2,1], is_max_pool=True)
x = conv_layer('conv5_1', x, 512, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
x = conv_layer('conv5_2', x, 512, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
x = conv_layer('conv5_3', x, 512, kernel_size=[3,3], stride=[1,1,1,1], is_pretrain=is_pretrain)
with tf.name_scope('pool5'):
x = pool('pool5', x, kernel=[1,2,2,1], stride=[1,2,2,1], is_max_pool=True)
x = fc_layer('fc6', x, out_nodes=4096)
assert x.get_shape().as_list()[1:] == [4096]
x = fc_layer('fc7', x, out_nodes=4096)
fc8 = fc_layer('fc8', x, out_nodes=n_classes)
# softmax = tf.nn.softmax(fc8)
        return fc8
| _____no_output_____ | Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
Define the loss function, using cross-entropy | '''Compute loss
Args:
logits: logits tensor, [batch_size, n_classes]
labels: one-hot labels
'''
def loss(logits, labels):
with tf.name_scope('loss') as scope:
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels,name='cross-entropy')
loss = tf.reduce_mean(cross_entropy, name='loss')
tf.summary.scalar(scope+'/loss', loss)
return loss | _____no_output_____ | Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
Define the accuracy metric | '''
Evaluate the quality of the logits at predicting the label.
Args:
logits: Logits tensor, float - [batch_size, NUM_CLASSES].
labels: Labels tensor,
'''
def accuracy(logits, labels):
with tf.name_scope('accuracy') as scope:
correct = tf.equal(tf.arg_max(logits, 1), tf.arg_max(labels, 1))
correct = tf.cast(correct, tf.float32)
accuracy = tf.reduce_mean(correct)*100.0
tf.summary.scalar(scope+'/accuracy', accuracy)
return accuracy | _____no_output_____ | Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
Define the optimization function | def optimize(loss, learning_rate, global_step):
'''optimization, use Gradient Descent as default
'''
with tf.name_scope('optimizer'):
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
#optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss, global_step=global_step)
return train_op | _____no_output_____ | Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
Define the pretrained-model loading function | def load_with_skip(data_path, session, skip_layer):
data_dict = np.load(data_path, encoding='latin1').item()
for key in data_dict:
if key not in skip_layer:
with tf.variable_scope(key, reuse=True):
for subkey, data in zip(('weights', 'biases'), data_dict[key]):
session.run(tf.get_variable(subkey).assign(data)) | _____no_output_____ | Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
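The loop above implies that the pretrained file is a NumPy-pickled dict mapping each layer name to a `[weights, biases]` pair (the layout commonly used for `vgg16.npy` conversions). A typical call, after building the graph and initializing variables, would look like the commented sketch below (the file path matches the one used in `train()` later).

```
# Assumed file layout: {'conv1_1': [weights, biases], 'conv1_2': [weights, biases], ...}
# with tf.Session() as sess:
#     sess.run(tf.global_variables_initializer())
#     load_with_skip('.//vgg16_pretrain//vgg16.npy', sess, skip_layer=['fc6', 'fc7', 'fc8'])
```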
Define the training-image reading function | def read_cifar10(data_dir, is_train, batch_size, shuffle):
"""Read CIFAR10
Args:
data_dir: the directory of CIFAR10
is_train: boolen
batch_size:
shuffle:
Returns:
label: 1D tensor, tf.int32
image: 4D tensor, [batch_size, height, width, 3], tf.float32
"""
img_width = 32
img_height = 32
img_depth = 3
label_bytes = 1
image_bytes = img_width*img_height*img_depth
with tf.name_scope('input'):
if is_train:
filenames = [os.path.join(data_dir, 'data_batch_%d.bin' %ii)
for ii in np.arange(1, 6)]
else:
filenames = [os.path.join(data_dir, 'test_batch.bin')]
filename_queue = tf.train.string_input_producer(filenames)
reader = tf.FixedLengthRecordReader(label_bytes + image_bytes)
key, value = reader.read(filename_queue)
record_bytes = tf.decode_raw(value, tf.uint8)
label = tf.slice(record_bytes, [0], [label_bytes])
label = tf.cast(label, tf.int32)
image_raw = tf.slice(record_bytes, [label_bytes], [image_bytes])
image_raw = tf.reshape(image_raw, [img_depth, img_height, img_width])
image = tf.transpose(image_raw, (1,2,0)) # convert from D/H/W to H/W/D
image = tf.cast(image, tf.float32)
# # data argumentation
# image = tf.random_crop(image, [24, 24, 3])# randomly crop the image size to 24 x 24
# image = tf.image.random_flip_left_right(image)
# image = tf.image.random_brightness(image, max_delta=63)
# image = tf.image.random_contrast(image,lower=0.2,upper=1.8)
image = tf.image.per_image_standardization(image) #substract off the mean and divide by the variance
if shuffle:
images, label_batch = tf.train.shuffle_batch(
[image, label],
batch_size = batch_size,
num_threads= 64,
capacity = 20000,
min_after_dequeue = 3000)
else:
images, label_batch = tf.train.batch(
[image, label],
batch_size = batch_size,
num_threads = 64,
capacity= 2000)
## ONE-HOT
n_classes = 10
label_batch = tf.one_hot(label_batch, depth= n_classes)
label_batch = tf.cast(label_batch, dtype=tf.int32)
label_batch = tf.reshape(label_batch, [batch_size, n_classes])
return images, label_batch | _____no_output_____ | Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
Define the training function | IMG_W = 32
IMG_H = 32
N_CLASSES = 10
BATCH_SIZE = 32
learning_rate = 0.01
MAX_STEP = 10 # it took me about one hour to complete the training.
IS_PRETRAIN = False
image_size = 224  # input image size
# quick smoke test that the graph builds, done in a throwaway graph so it does not clash with train() below
with tf.Graph().as_default():
    images = tf.Variable(tf.random_normal([BATCH_SIZE, image_size, image_size, 3], dtype=tf.float32, stddev=1e-1))
    logits = vgg16_net(images, N_CLASSES, IS_PRETRAIN)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
def train():
pre_trained_weights = './/vgg16_pretrain//vgg16.npy'
data_dir = './/data//cifar-10-batches-bin//'
train_log_dir = './/logs//train//'
val_log_dir = './/logs//val//'
with tf.name_scope('input'):
tra_image_batch, tra_label_batch = read_cifar10(data_dir=data_dir,
is_train=True,
batch_size= BATCH_SIZE,
shuffle=True)
val_image_batch, val_label_batch = read_cifar10(data_dir=data_dir,
is_train=False,
batch_size= BATCH_SIZE,
shuffle=False)
x = tf.placeholder(tf.float32, shape=[BATCH_SIZE, IMG_W, IMG_H, 3])
y_ = tf.placeholder(tf.int16, shape=[BATCH_SIZE, N_CLASSES])
logits = vgg16_net(x, N_CLASSES, IS_PRETRAIN)
loss_1 = loss(logits, y_)
accuracy = accuracy(logits, y_)
my_global_step = tf.Variable(0, name='global_step', trainable=False)
train_op = optimize(loss_1, learning_rate, my_global_step)
saver = tf.train.Saver(tf.global_variables())
summary_op = tf.summary.merge_all()
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
        print(x.shape)
        print(y_.shape)
if(IS_PRETRAIN):
load_with_skip(pre_trained_weights, sess, ['fc6','fc7','fc8'])
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
tra_summary_writer = tf.summary.FileWriter(train_log_dir, sess.graph)
val_summary_writer = tf.summary.FileWriter(val_log_dir, sess.graph)
try:
for step in np.arange(MAX_STEP):
if coord.should_stop():
break
tra_images,tra_labels = sess.run([tra_image_batch, tra_label_batch])
                _, tra_loss, tra_acc = sess.run([train_op, loss_1, accuracy],
                                                feed_dict={x: tra_images, y_: tra_labels})
                if step % 50 == 0 or (step + 1) == MAX_STEP:
                    print('Step: %d, loss: %.4f, accuracy: %.4f%%' % (step, tra_loss, tra_acc))
                    summary_str = sess.run(summary_op, feed_dict={x: tra_images, y_: tra_labels})
tra_summary_writer.add_summary(summary_str, step)
if step % 200 == 0 or (step + 1) == MAX_STEP:
val_images, val_labels = sess.run([val_image_batch, val_label_batch])
                    val_loss, val_acc = sess.run([loss_1, accuracy],
                                                 feed_dict={x: val_images, y_: val_labels})
                    print('** Step %d, val loss = %.2f, val accuracy = %.2f%% **' % (step, val_loss, val_acc))
                    summary_str = sess.run(summary_op, feed_dict={x: val_images, y_: val_labels})
val_summary_writer.add_summary(summary_str, step)
if step % 2000 == 0 or (step + 1) == MAX_STEP:
checkpoint_path = os.path.join(train_log_dir, 'model.ckpt')
saver.save(sess, checkpoint_path, global_step=step)
except tf.errors.OutOfRangeError:
print('Done training -- epoch limit reached')
finally:
coord.request_stop()
coord.join(threads)
train() | _____no_output_____ | Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
Using VGG16 | import math
import time
from datetime import datetime
def time_tensorflow_run(session, target, feed, info_string):
    num_steps_burn_in = 10  # number of warm-up iterations
    total_duration = 0.0  # total elapsed time
    total_duration_squared = 0.0  # sum of squared durations, used to compute the variance
for i in range(num_batches + num_steps_burn_in):
start_time = time.time()
_ = session.run(target,feed_dict=feed)
duration = time.time() - start_time
        if i >= num_steps_burn_in:  # only count iterations after the warm-up
if not i % 10:
print('%s:step %d,duration = %.3f' % (datetime.now(), i - num_steps_burn_in, duration))
total_duration += duration
total_duration_squared += duration * duration
    mn = total_duration / num_batches  # average time per batch
    vr = total_duration_squared / num_batches - mn * mn  # variance
    sd = math.sqrt(vr)  # standard deviation
print('%s: %s across %d steps, %.3f +/- %.3f sec/batch' % (datetime.now(), info_string, num_batches, mn, sd))
def run_benchmark():
with tf.Graph().as_default():
        '''Build a batch of random 224x224 images with tf.random_normal (normal distribution, stddev 0.1)'''
        image_size = 224  # input image size
        images = tf.Variable(tf.random_normal([batch_size, image_size, image_size, 3], dtype=tf.float32, stddev=1e-1))
        # placeholder for keep_prob (note: fc_layer above uses its own default keep_prob internally)
        keep_prob = tf.placeholder(tf.float32)
        # vgg16_net as defined above returns a single logits tensor
        fc8 = vgg16_net(images, N_CLASSES)
        init = tf.global_variables_initializer()
        sess = tf.Session()
        sess.run(init)
        # set keep_prob to 1.0 and use time_tensorflow_run to benchmark the forward pass
        time_tensorflow_run(sess, fc8, {keep_prob: 1.0}, "Forward")
        # simulate the training process
        objective = tf.nn.l2_loss(fc8)  # define a simple loss
        grad = tf.gradients(objective, tf.trainable_variables())  # gradients of all model parameters w.r.t. the loss
        # benchmark the backward pass
        time_tensorflow_run(sess, grad, {keep_prob: 0.5}, "Forward-backward")
batch_size = 32
num_batches = 100
run_benchmark() | _____no_output_____ | Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
Other parameters | # Construct model (note: conv_net, x, y, weights, biases and keep_prob are assumed to be defined elsewhere)
pred = conv_net(x, weights, biases, keep_prob)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
saver=tf.train.Saver() | _____no_output_____ | Apache-2.0 | VGG16/TensorFlow/.ipynb_checkpoints/vgg16_tensorflow-checkpoint.ipynb | user-ZJ/deep-learning |
Project: Part of Speech Tagging with Hidden Markov Models --- IntroductionPart of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more. The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! **Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)** Your submissions should include both the `html` and `ipynb` files. **Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. The Road AheadYou must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers.- [Step 1](Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus- [Step 2](Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline- [Step 3](Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline- [Step 4](Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine. | # Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers, tests
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from IPython.core.display import HTML
from itertools import chain
from collections import Counter, defaultdict
from helpers import show_model, Dataset
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution | _____no_output_____ | MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
Step 1: Read and preprocess the dataset---We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same.The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.Example from the Brown corpus. ```b100-38532Perhaps ADVit PRONwas VERBright ADJ; .; .b100-35577...``` | data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8)
print("There are {} sentences in the corpus.".format(len(data)))
print("There are {} sentences in the training set.".format(len(data.training_set)))
print("There are {} sentences in the testing set.".format(len(data.testing_set)))
assert len(data) == len(data.training_set) + len(data.testing_set), \
"The number of sentences in the training set + testing set should sum to the number of sentences in the corpus" | There are 57340 sentences in the corpus.
There are 45872 sentences in the training set.
There are 11468 sentences in the testing set.
| MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
The Dataset InterfaceYou can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step.```Dataset-only Attributes: training_set - reference to a Subset object containing the samples for training testing_set - reference to a Subset object containing the samples for testingDataset & Subset Attributes: sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus vocab - an immutable collection of the unique words in the corpus tagset - an immutable collection of the unique tags in the corpus X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...) Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...) N - returns the number of distinct samples (individual words or tags) in the datasetMethods: stream() - returns a flat iterable over all (word, tag) pairs across all sentences in the corpus __iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs __len__() - returns the number of sentences in the dataset```For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes:```subset.keys == {"s1", "s0"} unorderedsubset.vocab == {"See", "run", "ran", "Spot"} unorderedsubset.tagset == {"VERB", "NOUN"} unorderedsubset.X == (("Spot", "ran"), ("See", "Spot", "run")) order matches .keyssubset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) order matches .keyssubset.N == 7 there are a total of seven observations over all sentenceslen(subset) == 2 because there are two sentences```**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data. Sentences`Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`. | key = 'b100-38532'
print("Sentence: {}".format(key))
print("words:\n\t{!s}".format(data.sentences[key].words))
print("tags:\n\t{!s}".format(data.sentences[key].tags)) | Sentence: b100-38532
words:
('Perhaps', 'it', 'was', 'right', ';', ';')
tags:
('ADV', 'PRON', 'VERB', 'ADJ', '.', '.')
| MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data. Counting Unique ElementsYou can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`. | print("There are a total of {} samples of {} unique words in the corpus."
.format(data.N, len(data.vocab)))
print("There are {} samples of {} unique words in the training set."
.format(data.training_set.N, len(data.training_set.vocab)))
print("There are {} samples of {} unique words in the testing set."
.format(data.testing_set.N, len(data.testing_set.vocab)))
print("There are {} words in the test set that are missing in the training set."
.format(len(data.testing_set.vocab - data.training_set.vocab)))
assert data.N == data.training_set.N + data.testing_set.N, \
"The number of training + test samples should sum to the total number of samples" | There are a total of 1161192 samples of 56057 unique words in the corpus.
There are 928458 samples of 50536 unique words in the training set.
There are 232734 samples of 25112 unique words in the testing set.
There are 5521 words in the test set that are missing in the training set.
| MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
Accessing word and tag SequencesThe `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset. | # accessing words with Dataset.X and tags with Dataset.Y
for i in range(2):
print("Sentence {}:".format(i + 1), data.X[i])
print()
print("Labels {}:".format(i + 1), data.Y[i])
print() | Sentence 1: ('Mr.', 'Podger', 'had', 'thanked', 'him', 'gravely', ',', 'and', 'now', 'he', 'made', 'use', 'of', 'the', 'advice', '.')
Labels 1: ('NOUN', 'NOUN', 'VERB', 'VERB', 'PRON', 'ADV', '.', 'CONJ', 'ADV', 'PRON', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence 2: ('But', 'there', 'seemed', 'to', 'be', 'some', 'difference', 'of', 'opinion', 'as', 'to', 'how', 'far', 'the', 'board', 'should', 'go', ',', 'and', 'whose', 'advice', 'it', 'should', 'follow', '.')
Labels 2: ('CONJ', 'PRT', 'VERB', 'PRT', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'ADP', 'ADV', 'ADV', 'DET', 'NOUN', 'VERB', 'VERB', '.', 'CONJ', 'DET', 'NOUN', 'PRON', 'VERB', 'VERB', '.')
| MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
Accessing (word, tag) SamplesThe `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus. | # use Dataset.stream() (word, tag) samples for the entire corpus
print("\nStream (word, tag) pairs:\n")
for i, pair in enumerate(data.stream()):
print("\t", pair)
if i > 5: break |
Stream (word, tag) pairs:
('Mr.', 'NOUN')
('Podger', 'NOUN')
('had', 'VERB')
('thanked', 'VERB')
('him', 'PRON')
('gravely', 'ADV')
(',', '.')
| MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. In the next several cells you will complete functions to compute several sets of frequency counts. Step 2: Build a Most Frequent Class tagger---Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus. IMPLEMENTATION: Pair CountsComplete the function below that computes the joint frequency counts for two input sequences. | def pair_counts(sequences_A, sequences_B):
"""Return a dictionary keyed to each unique value in the first sequence list
that counts the number of occurrences of the corresponding value from the
second sequences list.
For example, if sequences_A is tags and sequences_B is the corresponding
words, then if 1244 sequences contain the word "time" tagged as a NOUN, then
you should return a dictionary such that pair_counts[NOUN][time] == 1244
"""
pairs = defaultdict(lambda: defaultdict(int))
for (tag, word) in zip(sequences_A, sequences_B):
pairs[tag][word]+=1
return pairs
tags = [t for i, (w, t) in enumerate(data.stream())]
words = [w for i, (w, t) in enumerate(data.stream())]
# Calculate C(t_i, w_i)
emission_counts = pair_counts(tags, words)
print(emission_counts.keys())
assert len(emission_counts) == 12, \
"Uh oh. There should be 12 tags in your dictionary."
assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \
"Hmmm...'time' is expected to be the most common NOUN."
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>') | dict_keys(['NOUN', 'VERB', 'PRON', 'ADV', '.', 'CONJ', 'ADP', 'DET', 'PRT', 'ADJ', 'X', 'NUM'])
| MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
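As a small aside (an optional addition, not part of the project code): the two list comprehensions above walk `data.stream()` twice and never use the `enumerate` index. Since `stream()` yields `(word, tag)` pairs, a single pass with `zip` gives the same sequences.

```python
# optional one-pass alternative to the two list comprehensions above;
# data.stream() yields (word, tag) pairs, so unpacking with zip splits them apart
all_words, all_tags = zip(*data.stream())
emission_counts_alt = pair_counts(all_tags, all_words)  # same counts as emission_counts
```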
IMPLEMENTATION: Most Frequent Class TaggerUse the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.The `MFCTagger` class is provided to mock the interface of Pomegranate HMM models so that they can be used interchangeably. | # Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word
from collections import namedtuple
FakeState = namedtuple("FakeState", "name")
class MFCTagger:
# NOTE: You should not need to modify this class or any of its methods
missing = FakeState(name="<MISSING>")
def __init__(self, table):
self.table = defaultdict(lambda: MFCTagger.missing)
self.table.update({word: FakeState(name=tag) for word, tag in table.items()})
def viterbi(self, seq):
"""This method simplifies predictions by matching the Pomegranate viterbi() interface"""
return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"]))
# TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not
# the same as the emission probabilities) and use it to fill the mfc_table
tags = [t for i, (w, t) in enumerate(data.training_set.stream())]
words = [w for i, (w, t) in enumerate(data.training_set.stream())]
word_counts = pair_counts(words, tags)
mfc_table = {}
for w, t in word_counts.items():
mfc_table[w] = max(t.keys(), key=lambda key: t[key])
#dict((word, max(tags.keys(), key=lambda key: tags[key])) for word, tags in word_counts.items())
# DO NOT MODIFY BELOW THIS LINE
mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance
assert len(mfc_table) == len(data.training_set.vocab), ""
assert all(k in data.training_set.vocab for k in mfc_table.keys()), ""
assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, ""
HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>') | _____no_output_____ | MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
Making Predictions with a ModelThe helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger. | def replace_unknown(sequence):
"""Return a copy of the input sequence where each unknown word is replaced
by the literal string value 'nan'. Pomegranate will ignore these values
during computation.
"""
return [w if w in data.training_set.vocab else 'nan' for w in sequence]
def simplify_decoding(X, model):
"""X should be a 1-D sequence of observations for the model to predict"""
_, state_path = model.viterbi(replace_unknown(X))
return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions | _____no_output_____ | MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
Example Decoding Sequences with MFC Tagger | for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, mfc_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n") | Sentence Key: b100-28144
Predicted labels:
-----------------
['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']
Actual labels:
--------------
('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')
Sentence Key: b100-23146
Predicted labels:
-----------------
['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']
Actual labels:
--------------
('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence Key: b100-35462
Predicted labels:
-----------------
['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', '<MISSING>', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADV', 'NOUN', '.']
Actual labels:
--------------
('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
| MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
Evaluating Model AccuracyThe function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus. | def accuracy(X, Y, model):
"""Calculate the prediction accuracy by using the model to decode each sequence
in the input X and comparing the prediction with the true labels in Y.
The X should be an array whose first dimension is the number of sentences to test,
and each element of the array should be an iterable of the words in the sequence.
The arrays X and Y should have the exact same shape.
X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...]
Y = [(), (), ...]
"""
correct = total_predictions = 0
for observations, actual_tags in zip(X, Y):
# The model.viterbi call in simplify_decoding will return None if the HMM
# raises an error (for example, if a test sentence contains a word that
# is out of vocabulary for the training set). Any exception counts the
# full sentence as an error (which makes this a conservative estimate).
try:
most_likely_tags = simplify_decoding(observations, model)
correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))
except:
pass
total_predictions += len(observations)
return correct / total_predictions | _____no_output_____ | MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
Evaluate the accuracy of the MFC taggerRun the next cell to evaluate the accuracy of the tagger on the training and test corpus. | mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)
print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc))
mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)
print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc))
assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right."
assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>') | training accuracy mfc_model: 95.72%
testing accuracy mfc_model: 93.01%
| MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
Step 3: Build an HMM tagger---The HMM tagger has one hidden state for each possible tag, and is parameterized by two distributions: the emission probabilities giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities giving the conditional probability of moving between **tags** during the sequence.We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence).The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula:$$\hat{t}_1^n = \underset{t_1^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information. IMPLEMENTATION: Unigram CountsComplete the function below to estimate the occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.)$$P(tag_1) = \frac{C(tag_1)}{N}$$ | def unigram_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequence list that
counts the number of occurrences of the value in the sequences list. The sequences
collection should be a 2-dimensional array.
For example, if the tag NOUN appears 275558 times over all the input sequences,
then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.
"""
# TODO: Finish this function!
counts = {}
for i, sentence in enumerate(sequences):
for x, y in enumerate(sentence):
counts[y] = counts[y]+1 if y in counts else 1
return(counts)
# TODO: call unigram_counts with a list of tag sequences from the training set
tag_unigrams = unigram_counts(data.training_set.Y)
assert set(tag_unigrams.keys()) == data.training_set.tagset, \
"Uh oh. It looks like your tag counts doesn't include all the tags!"
assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \
"Hmmm...'X' is expected to be the least common class"
assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \
"Hmmm...'NOUN' is expected to be the most common class"
HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>') | _____no_output_____ | MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
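As an optional sanity check (an addition, not required by the project), the counts above can be turned into the $P(tag)$ estimates from the formula in the previous cell; the resulting probabilities should sum to 1 up to floating-point error.

```python
# convert the raw tag counts into maximum-likelihood unigram probabilities P(tag) = C(tag) / N
total_tag_count = sum(tag_unigrams.values())
tag_probs = {tag: count / total_tag_count for tag, count in tag_unigrams.items()}
print(round(sum(tag_probs.values()), 6))  # expect 1.0
```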
IMPLEMENTATION: Bigram CountsComplete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_1, tag_2)}{C(tag_1)}$$ | def bigram_counts(sequences):
"""Return a dictionary keyed to each unique PAIR of values in the input sequences
list that counts the number of occurrences of pair in the sequences list. The input
should be a 2-dimensional array.
For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should
return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582
"""
counts = {}
for i, sentence in enumerate(sequences):
for y in range(len(sentence) - 1):
counts[(sentence[y], sentence[y+1])] = counts[(sentence[y], sentence[y+1])] + 1 if (sentence[y], sentence[y+1]) in counts else 1
return counts
# TODO: call bigram_counts with a list of tag sequences from the training set
tag_bigrams = bigram_counts(data.training_set.Y)
assert len(tag_bigrams) == 144, \
"Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)"
assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \
"Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')."
assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \
"Hmmm...('DET', 'NOUN') is expected to be the most common bigram."
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>') | _____no_output_____ | MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
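Optionally (this sketch is an addition, not project-provided code), the bigram and unigram counts can already be combined into the transition estimates $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$ that the HMM built below will use on its edges.

```python
# maximum-likelihood transition probabilities from the counts computed above
transition_probs = {(t1, t2): count / tag_unigrams[t1]
                    for (t1, t2), count in tag_bigrams.items()}
print(transition_probs[('DET', 'NOUN')])  # e.g. how often a DET is followed by a NOUN
```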
IMPLEMENTATION: Sequence Starting CountsComplete the code below to estimate the bigram probabilities of a sequence starting with each tag. | def starting_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the beginning of
a sequence.
For example, if 8093 sequences start with NOUN, then you should return a
dictionary such that your_starting_counts[NOUN] == 8093
"""
counts = {}
for i, sentence in enumerate(sequences):
counts[sentence[0]] = counts[sentence[0]] + 1 if sentence[0] in counts else 1
return counts
# TODO: Calculate the count of each tag starting a sequence
tag_starts = starting_counts(data.training_set.Y)
assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting bigram."
assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting bigram."
HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>') | _____no_output_____ | MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
IMPLEMENTATION: Sequence Ending CountsComplete the function below to estimate the bigram probabilities of a sequence ending with each tag. | def ending_counts(sequences):
"""Return a dictionary keyed to each unique value in the input sequences list
that counts the number of occurrences where that value is at the end of
a sequence.
For example, if 18 sequences end with DET, then you should return a
dictionary such that your_starting_counts[DET] == 18
"""
counts = {}
for i, sentence in enumerate(sequences):
index = len(sentence) -1
counts[sentence[index]] = counts[sentence[index]] + 1 if sentence[index] in counts else 1
return counts
# TODO: Calculate the count of each tag ending a sequence
tag_ends = ending_counts(data.training_set.Y)
assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending bigram."
assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending bigram."
HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>') | _____no_output_____ | MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
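Before building the network, it can help to see how the start/end counts become edge probabilities. This optional sketch (an addition, assuming the variables from the cells above) mirrors the formulas listed in the next section.

```python
# P(t | start) = C(start, t) / C(start), where C(start) is the number of training sentences
n_sequences = sum(tag_starts.values())
start_probs = {tag: count / n_sequences for tag, count in tag_starts.items()}

# P(end | t) = C(t, end) / C(t)
end_probs = {tag: count / tag_unigrams[tag] for tag, count in tag_ends.items()}
```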
IMPLEMENTATION: Basic HMM TaggerUse the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.- Add one state per tag - The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$- Add an edge from the starting state `basic_model.start` to each tag - The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$- Add an edge from each tag to the end state `basic_model.end` - The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$- Add an edge between _every_ pair of tags - The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$ | basic_model = HiddenMarkovModel(name="base-hmm-tagger")
# TODO: create states with emission probability distributions P(word | tag) and add to the model
# (Hint: you may need to loop & create/add new states)
states = []
for tag in data.training_set.tagset:
tag_distribution = {word: emission_counts[tag][word]/tag_unigrams[tag] for word in set(emission_counts[tag])}
tag_emissions = DiscreteDistribution(tag_distribution)
tag_state = State(tag_emissions, name=tag)
states.append(tag_state)
basic_model.add_states(states)
# TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1)
# (Hint: you may need to loop & add transitions
for state in states:
basic_model.add_transition(basic_model.start, state, tag_starts[state.name]/sum(tag_starts.values()))
for state in states:
basic_model.add_transition(state, basic_model.end, tag_ends[state.name]/tag_unigrams[state.name])
for state1 in states:
for state2 in states:
basic_model.add_transition(state1, state2, tag_bigrams[(state1.name,state2.name)]/tag_unigrams[state1.name])
# NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE
# finalize the model
basic_model.bake()
assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \
"Every state in your network should use the name of the associated tag, which must be one of the training set tags."
assert basic_model.edge_count() == 168, \
("Your network should have an edge from the start node to each state, one edge between every " +
"pair of tags (states), and an edge from each state to the end node.")
HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>')
hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)
print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc))
hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)
print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc))
assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right."
assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>') | training accuracy basic hmm model: 97.54%
testing accuracy basic hmm model: 96.16%
| MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
Example Decoding Sequences with the HMM Tagger | for key in data.testing_set.keys[:3]:
print("Sentence Key: {}\n".format(key))
print("Predicted labels:\n-----------------")
print(simplify_decoding(data.sentences[key].words, basic_model))
print()
print("Actual labels:\n--------------")
print(data.sentences[key].tags)
print("\n") | Sentence Key: b100-28144
Predicted labels:
-----------------
['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']
Actual labels:
--------------
('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')
Sentence Key: b100-23146
Predicted labels:
-----------------
['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']
Actual labels:
--------------
('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')
Sentence Key: b100-35462
Predicted labels:
-----------------
['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.']
Actual labels:
--------------
('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
| MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
Finishing the project---**Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review. | !!jupyter nbconvert *.ipynb | _____no_output_____ | MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
Step 4: [Optional] Improving model performance---There are additional enhancements that can be incorporated into your tagger that improve performance on larger tagsets where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means there will be fewer samples in each tag, and more tag and word/tag combinations with zero occurrences in the data. The techniques in this section are optional.- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts) Laplace smoothing is a technique where you add a small, non-zero value to all observed counts to account for unobserved values.- Backoff Smoothing Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.- Extending to Trigrams HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two. Obtain the Brown Corpus with a Larger TagsetRun the code below to download a copy of the brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the data following the format specified in Step 1, then you can reload it using all of the code above for comparison.Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets. | import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown
nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0] | _____no_output_____ | MIT | HMM Tagger.ipynb | DeepanshKhurana/udacityproject-hmm-tagger-nlp |
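One possible way to reuse the pipeline above (only a sketch of one approach, with made-up file names, not project-provided code) is to write the NLTK sentences out in the Step 1 format: a unique key line, one tab-separated word/tag pair per line, and a blank line between sentences. A matching tag file listing the tagset you keep would also be needed before `Dataset()` can reload it.

```python
# write the Brown corpus in the plaintext format expected by the Dataset reader
with open("brown-full-tagset.txt", "w") as f:
    for i, sentence in enumerate(training_corpus.tagged_sents()):
        f.write("sent-{}\n".format(i))              # unique sentence identifier
        for word, tag in sentence:
            f.write("{}\t{}\n".format(word, tag))   # tab-separated word/tag pair
        f.write("\n")                               # blank line separates sentences
```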
**READING IN KINESSO DATA** | imp = pd.read_csv('impressions_one_hour.csv')
imp = imp[imp['country'] == 'Germany']
imp = imp[~imp['zip_code'].isna()]
imp['zip_code'] = imp['zip_code'].astype(str) | _____no_output_____ | Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
**READING IN 2011 GERMANY CENSUS DATA** | # data from: https://www.suche-postleitzahl.org/downloads
zip_codes = pd.read_csv("plz_einwohner.csv")
def add_zero(x):
if len(x) == 4:
return '0'+ x
else:
return x | _____no_output_____ | Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
**CORRECTING FORMATTING ERROR THAT REMOVED INITIAL '0' FROM ZIPCODES** | zip_codes['zipcode'] = zip_codes['zipcode'].astype(str).apply(add_zero)
zip_codes.head() | _____no_output_____ | Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
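An equivalent, slightly more robust alternative (optional; not what the original notebook used) is pandas' built-in zero padding, which also covers codes that lost more than one leading zero.

```python
# pad every zipcode string to 5 characters with leading zeros
zip_codes['zipcode'] = zip_codes['zipcode'].astype(str).str.zfill(5)
```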
Real Population of Germany is 83.02 million | np.sum(zip_codes['population']) | _____no_output_____ | Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
**CALCULATING VALUE COUNTS FROM KINESSO DATA** | val_cou = imp['zip_code'].value_counts()
val_counts = pd.DataFrame(columns=['zipcode', 'count'], data=val_cou)
val_counts['zipcode'] = val_cou.index.astype(str)
val_counts['count'] = val_cou.values.astype(int)
val_counts | _____no_output_____ | Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
**MERGING TOGETHER KINESSO VALUE COUNTS WITH CENSUS DATA***ONLY 19 ZIPCODES DO NOT HAVE CENSUS DATA* | population_count = val_counts.merge(right=zip_codes, right_on='zipcode', left_on='zipcode', how='outer')
population_count_f = population_count.dropna()
#only 19 zipcodes without data
len(population_count[population_count['population'].isna()]) | _____no_output_____ | Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
Here count is the observed number from the Kinesso dataset and population is the expected number from census dataset | population_count_f | _____no_output_____ | Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
**CALCULATING DEVICE FREQUENCIES FOR EACH ZIPCODE** | imp['count'] = [1] * len(imp)
device_model_make_counts = imp.groupby(['zip_code', 'device_make', 'device_model'], as_index=False).count()[['zip_code', 'device_make', 'device_model', 'count']]
total_calc = device_model_make_counts.groupby(['zip_code']).sum()
percent_calc = []
for i in device_model_make_counts.index:
zipc = device_model_make_counts.iloc[i]['zip_code']
percent_calc = np.append(percent_calc, device_model_make_counts.iloc[i]['count']/total_calc.loc[zipc])
device_model_make_counts['device % freq']= percent_calc *100
device_model_make_counts['combined'] = device_model_make_counts['device_make'] + ' ' + device_model_make_counts['device_model']
device_model_make_counts['zip_code'] = device_model_make_counts['zip_code'].astype(str).apply(add_zero)
device_model_make_counts
| _____no_output_____ | Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
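The per-row loop with `.iloc` above is slow on larger frames; a vectorized alternative (an optional rewrite that should produce the same percentages) uses `groupby(...).transform`.

```python
# compute each zip code's total impressions once and divide in a single vectorized pass
zip_totals = device_model_make_counts.groupby('zip_code')['count'].transform('sum')
device_model_make_counts['device % freq'] = device_model_make_counts['count'] / zip_totals * 100
```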
**CALCULATING PERCENT DIFFERENCE BETWEEN EXPECTED AND OBSERVED POPULATIONS FOR EACH ZIPCODE** | population_count_f['population % expected'] = (population_count_f['population']/sum(population_count_f['population']))*100
population_count_f['population % observed'] = (population_count_f['count']/sum(population_count_f['count']))*100
population_count_f['% difference'] = population_count_f['population % observed'] - population_count_f['population % expected']
population_count_f = population_count_f.rename(columns={'count':'observed population', 'population':'expected population'})
population_count_f | <ipython-input-46-cfa73adf08fd>:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
population_count_f['population % expected'] = (population_count_f['population']/sum(population_count_f['population']))*100
<ipython-input-46-cfa73adf08fd>:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
population_count_f['population % observed'] = (population_count_f['count']/sum(population_count_f['count']))*100
<ipython-input-46-cfa73adf08fd>:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
population_count_f['% difference'] = population_count_f['population % observed'] - population_count_f['population % expected']
| Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
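The `SettingWithCopyWarning` messages above come from adding columns to the slice returned by `dropna()`. Creating `population_count_f` with an explicit `.copy()` before the column assignments (an optional fix) silences the warnings without changing the results.

```python
# make population_count_f an independent DataFrame before assigning new columns
population_count_f = population_count.dropna().copy()
```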
*MERGING TOGETHER WITH DEVICE FREQUENCY DATA* | combo = device_model_make_counts.merge(right=population_count_f, right_on='zipcode', left_on='zip_code', how='outer').drop(['count', 'device_make', 'device_model'], axis=1)
combined_impressions = combo.sort_values('% difference', ascending=False)
combined_impressions | _____no_output_____ | Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
*GROUPING TO IDENTIFY MOST COMMONLY USED DEVICE* | most_common_device = combined_impressions.groupby(['zip_code']).max()
most_common_device | _____no_output_____ | Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
*IDENTIFYING MOST UNDER REPRESENTED ZIP CODES* | underrepresented = most_common_device.sort_values('% difference').head(1000)
underrepresented.head(10)
underrepresented['combined'].value_counts() | _____no_output_____ | Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
*IDENTIFYING MOST OVER REPRESENTED ZIP CODES* | overrepresented = most_common_device.sort_values('% difference', ascending=False).head(1000)
overrepresented.head(10)
overrepresented['combined'].value_counts() | _____no_output_____ | Apache-2.0 | student-projects/fall-2020/Kinesso-AdShift-Diversifies-Marketing-Audiences/eda/[DEPRECATED] international_eda/germany/germany_eda.ipynb | UCBerkeley-SCET/DataX-Berkeley |
**I actually decided not to look too closely into the device frequency numbers because for the underrepresented zipcodes there's only like 8-9 people Kinesso advertised to-- and mostly to Apple users interestingly. Instead I did some digging into the top 10 and bottom 10 in a separate Google Doc titled: top 10 zipcode investigation** *quick summary: **over represented** zip codes belong to large urban cities with more industries and probably higher incomes but idk because I couldn't find zip code specific salary data. **under represented:** zip codes belong to small cities with industries like coal, tourism, and power plants. Also I suspect lower incomes, but idk for sure | sns.distplot(most_common_device['% difference'])
Fetching Twitter dataThis is a short notebook with a simple demonstration of how to fetch tweets using a Twitter Sandbox Environment. The sample data is saved as a JSON-lines file (one tweet per line), which must then be preprocessed. | import os
from os.path import join
from searchtweets import load_credentials, gen_rule_payload, ResultStream, collect_results
import json
project_dir = join(os.getcwd(), os.pardir)
raw_dir = join(project_dir, 'data', 'raw')
twitter_creds_path = join(project_dir, 'twitter_creds.yaml')
search_args = load_credentials(twitter_creds_path, yaml_key='sample_tweets_api')
# this should probably be moved to the configs.yaml
query = "((cyclone amphan) OR amphan)"
##Cyclone amphan
#Formed:16 May 2020
#Dissipated:21 May 2020
from_date="2020-05-14"
to_date="2020-06-15"
# I defined results_per_call as 100, which is the default for free users. It can be 500 for paid tiers.
rule = gen_rule_payload(query, results_per_call=100, from_date="2020-05-14", to_date="2020-06-15")
rs = ResultStream(rule_payload=rule,
max_results=200,
**search_args)
fname = f'SAMPLE_DATA_QUERY_{query}_FROMDATE_{from_date}_TODATE_{to_date}.json'
with open(join(raw_dir, fname), 'a', encoding='utf-8') as f:
for tweet in rs.stream():
json.dump(tweet, f)
f.write('\n')
print('done') | done
| MIT | notebooks/1.0-jf-fetching-tweets-example.ipynb | joaopfonseca/solve-iwmi |
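Since each tweet is written as one JSON object per line, the file can be loaded back for the preprocessing step mentioned above; a minimal sketch (an addition, not part of the original notebook):

```python
import pandas as pd

# read the newline-delimited JSON file of tweets into a DataFrame for preprocessing
tweets_df = pd.read_json(join(raw_dir, fname), lines=True)
print(tweets_df.shape)
```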
Count existing tweets for a given request | search_args = load_credentials(twitter_creds_path, yaml_key='search_tweets_api')
query = "(cyclone amphan)"
count_rule = gen_rule_payload(query, from_date="2020-05-14", to_date="2020-06-15", count_bucket="day", results_per_call=500)
counts = collect_results(count_rule, result_stream_args=search_args)
counts
tweets = 0
for day in counts:
tweets+=day['count']
tweets | _____no_output_____ | MIT | notebooks/1.0-jf-fetching-tweets-example.ipynb | joaopfonseca/solve-iwmi |
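The counting loop above can also be written as a one-line generator expression (optional, same result):

```python
tweets = sum(day['count'] for day in counts)
```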
https://pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/ Why: reads the image directly as an np.array, so it can go straight into image processing. Needs: steps for saving (one for the original capture and one for the processed image) | # import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
rawCapture = PiRGBArray(camera)
# allow the camera to warmup
time.sleep(0.1)
# grab an image from the camera
camera.capture(rawCapture, format="bgr")
image = rawCapture.array
# display the image on screen and wait for a keypress
cv2.imshow("Image", image)
cv2.waitKey(0)
#save the raw image, as captured
cv2.imwrite("/path to storage folder/img_name_raw.tiff", image)
#process (placeholder: replace with the real processing pipeline)
img = image  # fall back to the raw capture until a processing step is added
#save the processed image, ready to show
cv2.imwrite("/path to slideshow folder/img_name.jpg", img) | _____no_output_____ | CC0-1.0 | capture_array.ipynb | trucabrac/Blob-process---tests |
**This notebook is an exercise in the [Python](https://www.kaggle.com/learn/python) course. You can reference the tutorial at [this link](https://www.kaggle.com/colinmorris/hello-python).**--- Welcome to your first set of Python coding problems. If this is your first time using Kaggle Notebooks, welcome! Notebooks are composed of blocks (called "cells") of text and code. Each of these is editable, though you'll mainly be editing the code cells to answer some questions.To get started, try running the code cell below (by pressing the ► button, or clicking on the cell and pressing ctrl+enter on your keyboard). | print("You've successfully run some Python code")
print("Congratulations!") | You've successfully run some Python code
Congratulations!
| MIT | 1 - Python/1 - Python Syntax [exercise-syntax-variables-and-number].ipynb | AkashKumarSingh11032001/Kaggle_Course_Repository |
Try adding another line of code in the cell above and re-running it. Now let's get a little fancier: Add a new code cell by clicking on an existing code cell, hitting the escape key, and then hitting the `a` or `b` key. The `a` key will add a cell above the current cell, and `b` adds a cell below.Great! Now you know how to use Notebooks.Each hands-on exercise starts by setting up our feedback and code checking mechanism. Run the code cell below to do that. Then you'll be ready to move on to question 0. | from learntools.core import binder; binder.bind(globals())
from learntools.python.ex1 import *
print("Setup complete! You're ready to start question 0.") | Setup complete! You're ready to start question 0.
| MIT | 1 - Python/1 - Python Syntax [exercise-syntax-variables-and-number].ipynb | AkashKumarSingh11032001/Kaggle_Course_Repository |
0.*This is a silly question intended as an introduction to the format we use for hands-on exercises throughout all Kaggle courses.***What is your favorite color? **To complete this question, create a variable called `color` in the cell below with an appropriate value. The function call `q0.check()` (which we've already provided in the cell below) will check your answer. | # create a variable called color with an appropriate value on the line below
# (Remember, strings in Python must be enclosed in 'single' or "double" quotes)
color = "blue"
# Check your answer
q0.check() | _____no_output_____ | MIT | 1 - Python/1 - Python Syntax [exercise-syntax-variables-and-number].ipynb | AkashKumarSingh11032001/Kaggle_Course_Repository |
Didn't get the right answer? How do you not even know your own favorite color?!Delete the `#` in the line below to make one of the lines run. You can choose between getting a hint or the full answer by choosing which line to remove the `#` from. Removing the `#` is called uncommenting, because it changes that line from a "comment" which Python doesn't run to code, which Python does run. | # q0.hint()
# q0.solution() | _____no_output_____ | MIT | 1 - Python/1 - Python Syntax [exercise-syntax-variables-and-number].ipynb | AkashKumarSingh11032001/Kaggle_Course_Repository |
The upcoming questions work the same way. The only thing that will change are the question numbers. For the next question, you'll call `q1.check()`, `q1.hint()`, `q1.solution()`, for question 2, you'll call `q2.check()`, and so on. 1.Complete the code below. In case it's helpful, here is the table of available arithmetic operations:| Operator | Name | Description ||--------------|----------------|--------------------------------------------------------|| ``a + b`` | Addition | Sum of ``a`` and ``b`` || ``a - b`` | Subtraction | Difference of ``a`` and ``b`` || ``a * b`` | Multiplication | Product of ``a`` and ``b`` || ``a / b`` | True division | Quotient of ``a`` and ``b`` || ``a // b`` | Floor division | Quotient of ``a`` and ``b``, removing fractional parts || ``a % b`` | Modulus | Integer remainder after division of ``a`` by ``b`` || ``a ** b`` | Exponentiation | ``a`` raised to the power of ``b`` || ``-a`` | Negation | The negative of ``a`` | | pi = 3.14159 # approximate
diameter = 3
# Create a variable called 'radius' equal to half the diameter
radius = diameter/2
# Create a variable called 'area', using the formula for the area of a circle: pi times the radius squared
area = pi * (radius ** 2)
# Check your answer
q1.check()
# Uncomment and run the lines below if you need help.
#q1.hint()
#q1.solution() | _____no_output_____ | MIT | 1 - Python/1 - Python Syntax [exercise-syntax-variables-and-number].ipynb | AkashKumarSingh11032001/Kaggle_Course_Repository |
2.Add code to the following cell to swap variables `a` and `b` (so that `a` refers to the object previously referred to by `b` and vice versa). | ########### Setup code - don't touch this part ######################
# If you're curious, these are examples of lists. We'll talk about
# them in depth a few lessons from now. For now, just know that they're
# yet another type of Python object, like int or float.
a = [1, 2, 3]
b = [3, 2, 1]
q2.store_original_ids()
######################################################################
# Your code goes here. Swap the values to which a and b refer.
# If you get stuck, you can always uncomment one or both of the lines in
# the next cell for a hint, or to peek at the solution.
a,b = b,a
######################################################################
# Check your answer
q2.check()
#q2.hint()
#q2.solution() | _____no_output_____ | MIT | 1 - Python/1 - Python Syntax [exercise-syntax-variables-and-number].ipynb | AkashKumarSingh11032001/Kaggle_Course_Repository |
3a.Add parentheses to the following expression so that it evaluates to 1. | (5 - 3) // 2
#q3.a.hint()
# Check your answer (Run this code cell to receive credit!)
q3.a.solution() | _____no_output_____ | MIT | 1 - Python/1 - Python Syntax [exercise-syntax-variables-and-number].ipynb | AkashKumarSingh11032001/Kaggle_Course_Repository |
3b. 🌶️Questions, like this one, marked a spicy pepper are a bit harder.Add parentheses to the following expression so that it evaluates to 0. | (8 - (3 * 2)) - (1 + 1)
#q3.b.hint()
# Check your answer (Run this code cell to receive credit!)
q3.b.solution() | _____no_output_____ | MIT | 1 - Python/1 - Python Syntax [exercise-syntax-variables-and-number].ipynb | AkashKumarSingh11032001/Kaggle_Course_Repository |