markdown: string (0 to 1.02M chars)
code: string (0 to 832k chars)
output: string (0 to 1.02M chars)
license: string (3 to 36 chars)
path: string (6 to 265 chars)
repo_name: string (6 to 127 chars)
When you have merge conflicts after a `git pull`, the notebook file will be broken and won't open in Jupyter Notebook anymore. This command fixes this by changing the notebook back into a proper JSON file and adds markdown cells to signal the conflicts; you just have to open that notebook again and look for `>>>>>>>` to find those conflicts and fix them manually. The old broken file is copied with a `.ipynb.bak` extension, so it is still accessible in case the merge wasn't successful. Moreover, if `fast=True`, conflicts in outputs and metadata will automatically be fixed by using the local version if `trust_us=True`, the remote one if `trust_us=False`. With this option, it's very likely you won't have anything to do, unless there is a real conflict.
#export
def bump_version(version, part=2):
    version = version.split('.')
    version[part] = str(int(version[part]) + 1)
    for i in range(part+1, 3): version[i] = '0'
    return '.'.join(version)

test_eq(bump_version('0.1.1'   ), '0.1.2')
test_eq(bump_version('0.1.1', 1), '0.2.0')

# export
@call_parse
def nbdev_bump_version(part:Param("Part of version to bump", int)=2):
    "Increment version in `settings.py` by one"
    cfg = Config()
    print(f'Old version: {cfg.version}')
    cfg.d['version'] = bump_version(Config().version, part)
    cfg.save()
    update_version()
    print(f'New version: {cfg.version}')
_____no_output_____
Apache-2.0
nbs/06_cli.ipynb
maarten990/nbdev
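As a quick illustration of the `nbdev_fix_merge` behaviour described above, here is a minimal sketch that scans a repaired notebook for leftover conflict markers; the notebook name is hypothetical and the JSON layout is the standard `.ipynb` cell structure:

```python
import json

nb_path = "01_example.ipynb"  # hypothetical notebook that nbdev_fix_merge has just repaired

with open(nb_path) as f:
    nb = json.load(f)

# Report the cells that still contain unresolved merge-conflict markers
for i, cell in enumerate(nb["cells"]):
    src = "".join(cell["source"])
    if ">>>>>>>" in src or "<<<<<<<" in src:
        print(f"Cell {i} still has a conflict to resolve")
```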
Git hooks
# export
import subprocess

# export
@call_parse
def nbdev_install_git_hooks():
    "Install git hooks to clean/trust notebooks automatically"
    path = Config().config_file.parent
    fn = path/'.git'/'hooks'/'post-merge'
    #Trust notebooks after merge
    with open(fn, 'w') as f:
        f.write("""#!/bin/bash
echo "Trusting notebooks"
nbdev_trust_nbs
""")
    os.chmod(fn, os.stat(fn).st_mode | stat.S_IEXEC)
    #Clean notebooks on commit/diff
    with open(path/'.gitconfig', 'w') as f:
        f.write("""# Generated by nbdev_install_git_hooks
#
# If you need to disable this instrumentation do:
#   git config --local --unset include.path
#
# To restore the filter
#   git config --local include.path .gitconfig
#
# If you see notebooks not stripped, check the filters are applied in .gitattributes
#
[filter "clean-nbs"]
        clean = nbdev_clean_nbs --read_input_stream True
        smudge = cat
        required = true
[diff "ipynb"]
        textconv = nbdev_clean_nbs --disp True --fname
""")
    cmd = "git config --local include.path ../.gitconfig"
    print(f"Executing: {cmd}")
    result = subprocess.run(cmd.split(), shell=False, check=False, stderr=subprocess.PIPE)
    if result.returncode == 0:
        print("Success: hooks are installed and repo's .gitconfig is now trusted")
    else:
        print("Failed to trust repo's .gitconfig")
        if result.stderr: print(f"Error: {result.stderr.decode('utf-8')}")
    with open(Config().nbs_path/'.gitattributes', 'w') as f:
        f.write("""**/*.ipynb filter=clean-nbs
**/*.ipynb diff=ipynb
""")
_____no_output_____
Apache-2.0
nbs/06_cli.ipynb
maarten990/nbdev
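A hedged way to check that the configuration written by `nbdev_install_git_hooks` took effect, assuming it is run from inside the repository; it simply reads back the git settings that the function above writes:

```python
import subprocess

# Confirm the local include picked up the generated .gitconfig, its filter and diff driver
for key in ["include.path", "filter.clean-nbs.clean", "diff.ipynb.textconv"]:
    out = subprocess.run(["git", "config", "--get", key], capture_output=True, text=True)
    print(f"{key}: {out.stdout.strip() or '<not set>'}")
```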
This command installs git hooks to make sure notebooks are cleaned before you commit them to GitHub and automatically trusted at each merge. To be more specific, this creates:
- an executable `.git/hooks/post-merge` file that contains the command `nbdev_trust_nbs`
- a `.gitconfig` file that uses `nbdev_clean_nbs` as a filter/diff on all notebook files inside `nbs_folder`, and a `.gitattributes` file generated in this folder (copy this file into other folders where you might have notebooks you want cleaned as well)

Starting a new project
#export
_template_git_repo = "https://github.com/fastai/nbdev_template.git"

#export
@call_parse
def nbdev_new(name: Param("A directory to create the project in", str)):
    "Create a new nbdev project with a given name."
    path = Path(f"./{name}").absolute()
    if path.is_dir():
        print(f"Directory {path} already exists. Aborting.")
        return
    print(f"Creating a new nbdev project {name}.")
    try:
        subprocess.run(f"git clone {_template_git_repo} {path}".split(), check=True, timeout=5000)
        shutil.rmtree(path/".git")
        subprocess.run("git init".split(), cwd=path, check=True)
        subprocess.run("git add .".split(), cwd=path, check=True)
        subprocess.run("git commit -am \"Initial\"".split(), cwd=path, check=True)
        print(f"Created a new repo for project {name}. Please edit settings.ini and run nbdev_build_lib to get started.")
    except Exception as e:
        print("An error occurred while copying the nbdev project template:")
        print(e)
        if os.path.isdir(path): shutil.rmtree(path)
_____no_output_____
Apache-2.0
nbs/06_cli.ipynb
maarten990/nbdev
`nbdev_new` is a command line tool that creates a new nbdev project based on the [nbdev_template repo](https://github.com/fastai/nbdev_template). It'll initialize a new git repository and commit the new project. After you run `nbdev_new`, please edit `settings.ini` and run `nbdev_build_lib`.

Export
#hide
from nbdev.export import *
notebook2script()
Converted 00_export.ipynb. Converted 01_sync.ipynb. Converted 02_showdoc.ipynb. Converted 03_export2html.ipynb. Converted 04_test.ipynb. Converted 05_merge.ipynb. Converted 06_cli.ipynb. Converted 07_clean.ipynb. Converted 99_search.ipynb. Converted index.ipynb. Converted tutorial.ipynb.
Apache-2.0
nbs/06_cli.ipynb
maarten990/nbdev
Lecture 01: intro, inputs, numpy, pandas

1. Inputs: CSV / Text

We will start by ingesting plain text.
from __future__ import print_function
import csv

my_reader = csv.DictReader(open('data/eu_revolving_loans.csv', 'r'))
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
DictReader returns a "generator" -- which means that we only have one chance to read the returned row dictionaries. Let's just print it out line by line to see what we are reading in:
for line in my_reader: print(line)
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
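Because the generator is exhausted after one pass, here is a small sketch of how you could materialize the rows if you need to iterate over them more than once (same file as above):

```python
import csv

# Read every row into a list so it can be traversed repeatedly
with open('data/eu_revolving_loans.csv', 'r') as f:
    rows = list(csv.DictReader(f))

print(len(rows))   # number of data rows
print(rows[0])     # first row as a column -> value mapping
```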
Since the data is in tabular format, pandas is ideally suited for it, and there are convenient pandas import functions for reading in tabular data. Pandas provides direct CSV ingestion into "data frames":
import pandas as pd

df = pd.read_csv('data/eu_revolving_loans.csv')
df.head()
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
As we briefly discussed last week, simply reading in the file without any configuration generates a fairly messy data frame. We should give pandas some hints as to where the header rows are and which column is the index:
df = pd.read_csv('data/eu_revolving_loans.csv', header=[1,2,4], index_col=0)
df.head()
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
2. Inputs: Excel

Many organizations still use Excel as the common medium for communicating data and analysis. We will look quickly at how to ingest Excel data. There are many packages available to read Excel files; we will use one popular one here.
from __future__ import print_function
from openpyxl import load_workbook
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Let's take a look at the Excel file that we want to read into Jupyter.
!open 'data/climate_change_download_0.xlsx'
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Here is how we can read the Excel file into the Jupyter environment.
wb = load_workbook(filename='data/climate_change_download_0.xlsx')
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
What are the "sheets" in this workbook?
wb.get_sheet_names()
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
We will focus on the sheet 'Data':
ws = wb.get_sheet_by_name('Data')
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
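A side note: in more recent openpyxl releases the `get_sheet_names()` and `get_sheet_by_name()` methods are deprecated; the equivalent calls (assuming a current openpyxl) are:

```python
print(wb.sheetnames)  # list of sheet names
ws = wb['Data']       # access a sheet by name
```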
For the sheet "Data", let's print out the content cell-by-cell to view the content.
for row in ws.rows:
    for cell in row:
        print(cell.value)
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Pandas also provides direct Excel data ingest:
import pandas as pd

df = pd.read_excel('data/climate_change_download_0.xlsx')
df.head()
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Here is another example with multiple sheets:
df = pd.read_excel('data/GHE_DALY_Global_2000_2012.xls', sheetname='Global2012', header=[4,5])
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
This dataframe has a "multi-level" index:
df.columns
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
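Since the two header rows were turned into a column MultiIndex, here is a short sketch of how you might inspect or flatten those multi-level columns before further work:

```python
# Inspect the levels of the multi-level columns
print(df.columns.nlevels)                   # number of header levels
print(df.columns.get_level_values(0)[:5])   # labels of the top level

# Optionally flatten the MultiIndex into single strings
flat = df.copy()
flat.columns = [' / '.join(str(part) for part in col).strip() for col in flat.columns]
flat.head()
```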
How do we export a dataframe back to Excel?
df.to_excel('data/my_excel.xlsx')
!open 'data/my_excel.xlsx'
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
3. Inputs: PDF

PDF is also a common communication medium for data and analysis. Let's look at how one can read data from PDF into Python.
import pdftables

my_pdf = open('data/WEF_GlobalCompetitivenessReport_2014-15.pdf', 'rb')
chart_page = pdftables.get_pdf_page(my_pdf, 29)
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
PDF is a proprietary file format with specific tagging that has been reverse engineered. Let's take a look at some structures in this file.
table = pdftables.page_to_tables(chart_page)

# list() is needed on Python 3, where zip returns an iterator rather than a list
titles = list(zip(table[0][0], table[0][1]))[:5]
titles = [''.join([title[0], title[1]]) for title in titles]
print(titles)
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
There is a table with structured data that we can peel out:
all_rows = []
for row_data in table[0][2:]:
    all_rows.extend([row_data[:5], row_data[5:]])

print(all_rows)
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
4. Configurations
from ConfigParser import ConfigParser  # on Python 3 this module is spelled `configparser`

config = ConfigParser()
config.read('../cfg/sample.cfg')
config.sections()
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
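The config file itself is not shown in this notebook; here is a hedged sketch of what `../cfg/sample.cfg` might look like, given the sections and keys read in the cells below (the values are placeholders, and the Python 3 module name is used):

```python
from configparser import ConfigParser

# Placeholder contents mirroring the keys used later in this notebook
sample = """
[twitter]
consumer_key = YOUR_CONSUMER_KEY
consumer_secret = YOUR_CONSUMER_SECRET
access_token = YOUR_ACCESS_TOKEN
access_token_secret = YOUR_ACCESS_TOKEN_SECRET

[openweathermap]
api_key = YOUR_API_KEY
"""

config = ConfigParser()
config.read_string(sample)
print(config.sections())
print(config.get('twitter', 'consumer_key'))
```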
5. APIs

Getting Twitter data from the API. Relevant links to the exercise here:
- Twitter Streaming: https://dev.twitter.com/streaming/overview
- API client: https://github.com/tweepy/tweepy
- Twitter app: https://apps.twitter.com

Create an authentication handler
import tweepy

auth = tweepy.OAuthHandler(config.get('twitter', 'consumer_key'),
                           config.get('twitter', 'consumer_secret'))
auth.set_access_token(config.get('twitter','access_token'),
                      config.get('twitter','access_token_secret'))
auth
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Create an API endpoint
api = tweepy.API(auth)
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Try REST-ful API call to Twitter
python_tweets = api.search('turkey')
for tweet in python_tweets:
    print(tweet.text)
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
For the streaming API call, we should run a standalone python program: tweetering.py

Input & Output to OpenWeatherMap API

Relevant links to the exercise here:
- http://openweathermap.org/
- http://openweathermap.org/current

API call:
```
api.openweathermap.org/data/2.5/weather?q={city name}
api.openweathermap.org/data/2.5/weather?q={city name},{country code}
```
Parameters:
> q: city name and country code divided by comma; use ISO 3166 country codes

Examples of API calls:
```
api.openweathermap.org/data/2.5/weather?q=London
api.openweathermap.org/data/2.5/weather?q=London,uk
```
from pprint import pprint
import requests

weather_key = config.get('openweathermap', 'api_key')
res = requests.get("http://api.openweathermap.org/data/2.5/weather",
                   params={"q": "San Francisco", "appid": weather_key, "units": "metric"})
pprint(res.json())
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
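The standalone streaming program mentioned above (tweetering.py) is not included here; the following is a rough sketch of what it might contain, written against the pre-4.0 tweepy API (tweepy 4.x replaced `StreamListener`, so treat this as an assumption about the older client used in this lecture):

```python
import tweepy

class PrintListener(tweepy.StreamListener):
    def on_status(self, status):
        # Print each tweet as it arrives
        print(status.text)

    def on_error(self, status_code):
        # Returning False disconnects the stream (e.g. on a 420 rate-limit response)
        return False

stream = tweepy.Stream(auth=api.auth, listener=PrintListener())
stream.filter(track=['turkey'])
```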
6. Python requests

"requests" is a wonderful HTTP library for Python, with the right level of abstraction to avoid lots of tedious plumbing (no need to manually add query strings to your URLs, or to form-encode your POST data). Keep-alive and HTTP connection pooling are 100% automatic, powered by urllib3, which is embedded within Requests.
```
>>> r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
>>> r.status_code
200
>>> r.headers['content-type']
'application/json; charset=utf8'
>>> r.encoding
'utf-8'
>>> r.text
u'{"type":"User"...'
>>> r.json()
{u'private_gists': 419, u'total_private_repos': 77, ...}
```
There is a lot of great documentation at the python-requests [site](http://docs.python-requests.org/en/master/) -- we are extracting selected highlights from there for your convenience here.

Making a request

Making a request with Requests is very simple. Begin by importing the Requests module:
import requests
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Now, let's try to get a webpage. For this example, let's get GitHub's public timeline
r = requests.get('https://api.github.com/events')
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Now, we have a Response object called r. We can get all the information we need from this object. Requests' simple API means that all forms of HTTP request are as obvious. For example, this is how you make an HTTP POST request:
r = requests.post('http://httpbin.org/post', data = {'key':'value'})
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
What about the other HTTP request types: PUT, DELETE, HEAD and OPTIONS? These are all just as simple:
r = requests.put('http://httpbin.org/put', data = {'key':'value'})
r = requests.delete('http://httpbin.org/delete')
r = requests.head('http://httpbin.org/get')
r = requests.options('http://httpbin.org/get')
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Passing Parameters In URLs

You often want to send some sort of data in the URL's query string. If you were constructing the URL by hand, this data would be given as key/value pairs in the URL after a question mark, e.g. httpbin.org/get?key=val. Requests allows you to provide these arguments as a dictionary, using the params keyword argument. As an example, if you wanted to pass key1=value1 and key2=value2 to httpbin.org/get, you would use the following code:
payload = {'key1': 'value1', 'key2': 'value2'}
r = requests.get('http://httpbin.org/get', params=payload)
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
You can see that the URL has been correctly encoded by printing the URL:
print(r.url)
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Note that any dictionary key whose value is None will not be added to the URL's query string. You can also pass a list of items as a value:
payload = {'key1': 'value1', 'key2': ['value2', 'value3']}
r = requests.get('http://httpbin.org/get', params=payload)
print(r.url)
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Response Content

We can read the content of the server's response. Consider the GitHub timeline again:
import requests

r = requests.get('https://api.github.com/events')
r.text
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Requests will automatically decode content from the server. Most unicode charsets are seamlessly decoded. When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers. The text encoding guessed by Requests is used when you access r.text. You can find out what encoding Requests is using, and change it, using the r.encoding property:
r.encoding
r.encoding = 'ISO-8859-1'
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
If you change the encoding, Requests will use the new value of r.encoding whenever you call r.text. You might want to do this in any situation where you can apply special logic to work out what the encoding of the content will be. For example, HTML and XML have the ability to specify their encoding in their body. In situations like this, you should use r.content to find the encoding, and then set r.encoding. This will let you use r.text with the correct encoding.

Requests will also use custom encodings in the event that you need them. If you have created your own encoding and registered it with the codecs module, you can simply use the codec name as the value of r.encoding and Requests will handle the decoding for you.

JSON Response Content

There's also a builtin JSON decoder, in case you're dealing with JSON data:
import requests

r = requests.get('https://api.github.com/events')
r.json()
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
In case the JSON decoding fails, r.json raises an exception. For example, if the response gets a 204 (No Content), or if the response contains invalid JSON, attempting r.json raises ValueError: No JSON object could be decoded. It should be noted that the success of the call to r.json does not indicate the success of the response. Some servers may return a JSON object in a failed response (e.g. error details with HTTP 500). Such JSON will be decoded and returned. To check that a request is successful, use r.raise_for_status() or check that r.status_code is what you expect.
r.status_code
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
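A small sketch tying the two checks together, using `raise_for_status()` for HTTP errors and catching a JSON failure separately:

```python
import requests

r = requests.get('https://api.github.com/events')
try:
    r.raise_for_status()   # raises requests.exceptions.HTTPError for 4xx/5xx responses
    data = r.json()        # may still raise ValueError if the body is not valid JSON
except requests.exceptions.HTTPError as err:
    print('Request failed:', err)
except ValueError:
    print('Response was not valid JSON')
```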
Custom Headers

If you'd like to add HTTP headers to a request, simply pass in a dict to the headers parameter. For example, we didn't specify our user-agent in the previous example:
url = 'https://api.github.com/some/endpoint'
headers = {'user-agent': 'my-app/0.0.1'}
r = requests.get(url, headers=headers)
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Note: Custom headers are given less precedence than more specific sources of information. For instance:
- Authorization headers set with headers= will be overridden if credentials are specified in .netrc, which in turn will be overridden by the auth= parameter.
- Authorization headers will be removed if you get redirected off-host.
- Proxy-Authorization headers will be overridden by proxy credentials provided in the URL.
- Content-Length headers will be overridden when we can determine the length of the content.

Response Headers

We can view the server's response headers using a Python dictionary:
r.headers
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
The dictionary is special, though: it's made just for HTTP headers. According to RFC 7230, HTTP Header names are case-insensitive. So, we can access the headers using any capitalization we want:
r.headers['Content-Type']
r.headers.get('content-type')
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Cookies

If a response contains some Cookies, you can quickly access them:
url = 'http://www.cnn.com'
r = requests.get(url)
print(r.cookies.items())
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
To send your own cookies to the server, you can use the cookies parameter:
url = 'http://httpbin.org/cookies'
cookies = dict(cookies_are='working')
r = requests.get(url, cookies=cookies)
r.text
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Redirection and History

By default Requests will perform location redirection for all verbs except HEAD. We can use the history property of the Response object to track redirection. The Response.history list contains the Response objects that were created in order to complete the request. The list is sorted from the oldest to the most recent response. For example, GitHub redirects all HTTP requests to HTTPS:
r = requests.get('http://github.com')
r.url
r.status_code
r.history
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
If you're using GET, OPTIONS, POST, PUT, PATCH or DELETE, you can disable redirection handling with the allow_redirects parameter:
r = requests.get('http://github.com', allow_redirects=False)
r.status_code
r.history
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
If you're using HEAD, you can enable redirection as well:
r = requests.head('http://github.com', allow_redirects=True)
r.url
r.history
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
Timeouts

You can tell Requests to stop waiting for a response after a given number of seconds with the timeout parameter:
requests.get('http://github.com', timeout=1)
_____no_output_____
MIT
lecture02.ingestion/lecture02.ingestion.ipynb
philmui/algorithmic-bias-2019
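If the server does not answer in time, Requests raises a `Timeout` exception; here is a minimal sketch of handling it:

```python
import requests

try:
    r = requests.get('http://github.com', timeout=1)
    print(r.status_code)
except requests.exceptions.Timeout:
    # Raised when no response arrives within 1 second
    print('Request timed out')
```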
IPL_Sport_Analysis: Task 3

Introduction

The Indian Premier League (IPL) is a professional Twenty20 cricket league in India contested during April and May of every year by teams representing Indian cities. The league was founded by the Board of Control for Cricket in India (BCCI) in 2007. The IPL is the most-attended cricket league in the world and ranks sixth among all sports leagues.

The data consists of two datasets: matches and deliveries. The matches dataset contains data on all IPL matches; the deliveries dataset contains ball-by-ball data for each IPL match.

Objective

The aim is to provide some interesting insights by analyzing the IPL data.
## Importing Required Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

import warnings
warnings.filterwarnings('ignore')
%matplotlib inline

## function to add data to plot
def annot_plot(ax,w,h):
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    for p in ax.patches:
        ax.annotate('{}'.format(p.get_height()), (p.get_x()+w, p.get_height()+h))
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Reading Data
match_data=pd.read_csv('matches.csv')
deliveries_data=pd.read_csv('deliveries.csv')

season_data=match_data[['id','season','winner']]
complete_data=deliveries_data.merge(season_data,how='inner',left_on='match_id',right_on='id')

match_data.head()

match_data['win_by']=np.where(match_data['win_by_runs']>0,'Bat first','Bowl first')
match_data.shape

deliveries_data.head(5)

deliveries_data['runs']=deliveries_data['total_runs'].cumsum()
deliveries_data.shape
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Number of Matches played in each IPL season
ax=sns.countplot('season',data=match_data,palette="Set2")
plt.ylabel('Matches')
annot_plot(ax,0.08,1)
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Matches Won by the Teams

Mumbai Indians won the maximum number of matches, followed by Chennai Super Kings.
match_data.groupby('winner')['winner'].agg(['count']).sort_values('count').reset_index().plot(x='winner',y='count',kind='barh')

ax=sns.countplot(x='winner',data=match_data)
plt.ylabel('Match')
plt.xticks(rotation=80)
annot_plot(ax,0.05,1)
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Win Percentage
match=match_data.win_by.value_counts()
labels=np.array(match.index)
sizes = match.values
colors = ['gold', 'lightskyblue']

# Plot
plt.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=90)
plt.title('Match Result')
plt.axis('equal')
plt.show()

sns.countplot('season',hue='win_by',data=match_data,palette="Set1")
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Toss Decisions so far
toss=match_data.toss_decision.value_counts()
labels=np.array(toss.index)
sizes = toss.values
colors = ['red', 'gold']
#explode = (0.1, 0, 0, 0)  # explode 1st slice

# Plot
plt.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=90)
plt.title('Toss Result')
plt.axis('equal')
plt.show()

sns.countplot('season',hue='toss_decision',data=match_data,palette="Set2")
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
IPL Winners
final_matches=match_data.drop_duplicates(subset=['season'], keep='last')
final_matches[['season','winner']].reset_index(drop=True).sort_values('season')
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
IPL Finals IPL Finals venues and winners along with the number of wins.
final_matches.groupby(['city','winner']).size()
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Number of IPL seasons won by teams
final_matches['winner'].value_counts()
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Win Percentage in Finals
match=final_matches.win_by.value_counts()
labels=np.array(match.index)
sizes = match.values
colors = ['gold', 'lightskyblue']

# Plot
plt.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=90)
plt.title('Match Result')
plt.axis('equal')
plt.show()
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Toss Decision in Finals
toss=final_matches.toss_decision.value_counts()
labels=np.array(toss.index)
sizes = toss.values
colors = ['gold', 'lightskyblue']
#explode = (0.1, 0, 0, 0)  # explode 1st slice

# Plot
plt.pie(sizes, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=90)
plt.title('Toss Result')
plt.axis('equal')
plt.show()

final_matches[['toss_winner','toss_decision','winner']].reset_index(drop=True)
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Man of the Match in final match
final_matches[['winner','player_of_match']].reset_index(drop=True)

len(final_matches[final_matches['toss_winner']==final_matches['winner']]['winner'])
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
IPL Leading Run Scorers

Suresh Raina is at the top with 4548 runs. There are 3 foreign players in this list; among them, Chris Gayle is the leading run scorer.
batsman_score=deliveries_data.groupby('batsman')['batsman_runs'].agg(['sum']).reset_index().sort_values('sum',ascending=False).reset_index(drop=True)
batsman_score=batsman_score.rename(columns={'sum':'batsman_runs'})
print("*** Top 10 Leading Run Scorer in IPL ***")
batsman_score.iloc[:10,:]

No_Matches_player_dismissed = deliveries_data[["match_id","player_dismissed"]]
No_Matches_player_dismissed = No_Matches_player_dismissed.groupby("player_dismissed")["match_id"].count().reset_index().sort_values(by="match_id",ascending=False).reset_index(drop=True)
No_Matches_player_dismissed.columns=["batsman","No_of Matches"]
No_Matches_player_dismissed.head(5)
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Batting Average
Batsman_Average=pd.merge(batsman_score,No_Matches_player_dismissed,on="batsman")  # merging the score and matches played by batsman
Batsman_Average=Batsman_Average[Batsman_Average["batsman_runs"]>=500]  # taking the average only for players with more than 500 runs under their belt
Batsman_Average["Average"]=Batsman_Average["batsman_runs"]/Batsman_Average["No_of Matches"]
Batsman_Average['Average']=Batsman_Average['Average'].apply(lambda x: round(x,2))
Batsman_Average=Batsman_Average.sort_values(by="Average",ascending=False).reset_index(drop=True)

top_bat_avg=Batsman_Average.iloc[:10,:]
ax=top_bat_avg.plot('batsman','Average',color='green',kind='bar')
plt.ylabel('Average')
plt.xticks(rotation=80)
annot_plot(ax,0,1)
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Amla is at the top of this list with a batting average of 44.38.

Dismissals in IPL
plt.figure(figsize=(12,6))
ax=sns.countplot(deliveries_data.dismissal_kind)
plt.xticks(rotation=90)
annot_plot(ax,0.2,100)
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Dismissal by Teams and their distribution
out=deliveries_data.groupby(['batting_team','dismissal_kind'])['dismissal_kind'].agg(['count'])
out.groupby(level=0).apply(lambda x: round(100 * x / float(x.sum()),2)).reset_index().sort_values(['batting_team','count'],ascending=[1,0]).set_index(['batting_team','dismissal_kind'])

wicket_data=deliveries_data.dropna(subset=['dismissal_kind'])
wicket_data=wicket_data[~wicket_data['dismissal_kind'].isin(['run out','retired hurt','obstructing the field'])]
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
IPL Most Wicket-Taking Bowlers
wicket_data.groupby('bowler')['dismissal_kind'].agg(['count']).reset_index().sort_values('count',ascending=False).reset_index(drop=True).iloc[:10,:]
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Malinga is at the top of this list with 170 wickets.

Powerplays

In the IPL, the Powerplay consists of the first 6 overs. During these first six overs, a maximum of two fielders can be outside the 30-yard circle.
powerplay_data=complete_data[complete_data['over']<=6]
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Runs in Powerplays
powerplay_data[powerplay_data['inning']==1].groupby('match_id')['total_runs'].agg(['sum']).reset_index().plot(x='match_id',y='sum',title='Batting First')
powerplay_data[powerplay_data['inning']==2].groupby('match_id')['total_runs'].agg(['sum']).reset_index().plot(x='match_id',y='sum',title='Batting Second')
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Highest Runs in Powerplays
powerplay_data.groupby(['season','match_id','inning'])['total_runs'].agg(['sum']).reset_index().groupby('season')['sum'].max()
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Highest Runs in Powerplay :Batting First
pi1=powerplay_data[powerplay_data['inning']==1].groupby(['season','match_id'])['total_runs'].agg(['sum'])
pi1.reset_index().groupby('season')['sum'].max()
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Highest Runs in Powerplay :Batting Second
pi2=powerplay_data[powerplay_data['inning']==2].groupby(['season','match_id'])['total_runs'].agg(['sum'])
pi2.reset_index().groupby('season')['sum'].max()
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Maximum Wickets Fall in PowerPlay
powerplay_data.dropna(subset=['dismissal_kind']).groupby(['season','match_id','inning'])['dismissal_kind'].agg(['count']).reset_index().groupby('season')['count'].max()
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
First Innings
powerplay_data[ powerplay_data['inning']==1].dropna( subset=['dismissal_kind']).groupby( ['season','match_id','inning'])['dismissal_kind'].agg(['count']).reset_index().groupby('season')['count'].max()
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
Second Innings
powerplay_data[ powerplay_data['inning']==2].dropna( subset=['dismissal_kind']).groupby(['season','match_id','inning'])['dismissal_kind'].agg( ['count']).reset_index().groupby('season')['count'].max()
_____no_output_____
MIT
EDA_IPL_sport.ipynb
mohit421/EDA_IPL-The_Sparks_foundation
HIERARCHICAL CLUSTERING

**File:** hierarchical.ipynb
**Course:** Data Science Foundations: Data Mining in Python

IMPORT LIBRARIES
import pandas as pd                                       # For dataframes
import matplotlib.pyplot as plt                           # For plotting data
import seaborn as sns                                     # For plotting data
from sklearn.cluster import AgglomerativeClustering       # For clustering
from scipy.cluster.hierarchy import dendrogram, linkage   # For clustering and visualization
_____no_output_____
Apache-2.0
Hierarchical.ipynb
VladimirsHisamutdinovs/data-mining
LOAD AND PREPARE DATARead the `penguins.csv` file from the `data` directory into variable `df`. Select a random sample of 75 cases of the dataset for easy visualization. Keep all features in variable `df` and store the class variable in `y`.
# Reads the .csv file into variable df
df = pd.read_csv('data/penguins.csv')

# Selects a random sample of 75 cases
df = df.sample(n=75, random_state=1)

# Separates the class variable in y
y = df.y

# Removes the y column from df
df = df.drop('y', axis=1)

# Displays the first 5 rows of df
df.head()
_____no_output_____
Apache-2.0
Hierarchical.ipynb
VladimirsHisamutdinovs/data-mining
HIERARCHICAL CLUSTERING In this demonstration, we'll use `SciPy` to perform hierarchical clustering. (Another common choice is `scikit-learn`.)The `scipy.cluster.hierarchy` package contains two functions, i.e., `linkage()` and `dendogram()` for hierarchical clustering. The `linkage()` function performs agglomerative clustering and the `dendogram()` function displays the clusters. Various `linkage` methods are possible. Here we'll use the `ward` linkage method that merges clusters so that variance of the clusters is minimized. Other linkage options are:- `average`- `single` - `complete` The `linkage()` function returns a linkage matrix with information about clusters. This matrix can be viewed using the `dendogram()` function. The code below performs clustering using the `euclidean` metric and displays the clusters.
# Performs agglomerative clustering using `ward` linkage and `euclidean` metric
hc = linkage(df, method='ward', metric='euclidean')

# Sets the figure size
fig = plt.figure(figsize=(15, 15))

# Displays the dendrogram
# The lambda function sets the labels of each leaf
dn = dendrogram(
    hc,
    leaf_label_func=lambda id: y.values[id],
    leaf_font_size=10)
_____no_output_____
Apache-2.0
Hierarchical.ipynb
VladimirsHisamutdinovs/data-mining
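The `AgglomerativeClustering` import above is scikit-learn's implementation of the same idea; here is a minimal sketch for comparison, assuming `df` holds only numeric features and guessing three clusters (one per penguin species):

```python
# Fit scikit-learn's agglomerative clustering with ward linkage
agg = AgglomerativeClustering(n_clusters=3, linkage='ward')
labels = agg.fit_predict(df)

# Cross-tabulate the assigned clusters against the known classes
pd.crosstab(y.values, labels)
```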
D1: Determine the Summary Statistics for June
# 1. Import the sqlalchemy extract function.
from sqlalchemy import extract

# 2. Write a query that filters the Measurement table to retrieve the temperatures for the month of June.
june_temps = session.query(Measurement.date, Measurement.tobs).\
    filter(func.strftime("%m", Measurement.date) == "06")

# 3. Convert the June temperatures to a list.
june_temps = session.query(Measurement.date, Measurement.tobs).\
    filter(func.strftime("%m", Measurement.date) == "06").all()
june_temps

# 4. Create a DataFrame from the list of temperatures for the month of June.
df = pd.DataFrame(june_temps, columns=['date', 'June temperature'])

# 5. Calculate and print out the summary statistics for the June temperature DataFrame.
df.describe()

# Calculate precipitation for June and put into a list
june_prcp = session.query(Measurement.date, Measurement.prcp).\
    filter(func.strftime("%m", Measurement.date) == "06").all()

df = pd.DataFrame(june_prcp, columns=['date', 'June precipitation'])
df.describe()
_____no_output_____
MIT
SurfsUp_Challenge.ipynb
jenv5507/surfs_up
D2: Determine the Summary Statistics for December
# 6. Write a query that filters the Measurement table to retrieve the temperatures for the month of December.
dec_temps = session.query(Measurement.date, Measurement.tobs).\
    filter(func.strftime("%m", Measurement.date) == "12")

# 7. Convert the December temperatures to a list.
dec_temps = session.query(Measurement.date, Measurement.tobs).\
    filter(func.strftime("%m", Measurement.date) == "12").all()

# 8. Create a DataFrame from the list of temperatures for the month of December.
df = pd.DataFrame(dec_temps, columns=['date', 'December temperature'])

# 9. Calculate and print out the summary statistics for the December temperature DataFrame.
df.describe()

# Calculate precipitation for December and put into a list
dec_prcp = session.query(Measurement.date, Measurement.prcp).\
    filter(func.strftime("%m", Measurement.date) == "12").all()

# Create a DataFrame for December precipitation
df = pd.DataFrame(dec_prcp, columns=['date', 'December precipitation'])
df.describe()
_____no_output_____
MIT
SurfsUp_Challenge.ipynb
jenv5507/surfs_up
Simulation of diffusion with noise Package imports
import numpy as np
import numpy.random as npr
from scipy.signal import convolve2d
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
_____no_output_____
MIT
diffusion_simple.ipynb
BenGravell/gridtoy
Settings
# Number of timesteps to simulate
duration = 500

# Dimension of simulation domain
n = 100

# Sampling time
dt = 0.1

# Initialize the state and noise
state = npr.rand(n, n)
noise = np.zeros([n, n])

# Define the diffusion rate - ensure rate*dt < 0.25 for numerical stability using Euler integration
rate = 2.0

# Define the force and noise amount
force_amount = 0.005
noise_amount = 0.040

# Define the force frequency
force_freq = 0.001

# Define the noise inertia (between 0 and 1, 0 is fully white noise, 1 is a constant signal)
noise_inertia = 0.9
_____no_output_____
MIT
diffusion_simple.ipynb
BenGravell/gridtoy
Simulation
# Compute the convolution kernel for diffusion dynamics
diffusion_kernel = np.array([[   0,    rate,    0],
                             [rate, -4*rate, rate],
                             [   0,    rate,    0]])

# Compute the force kernel
s = np.linspace(-1, 1, n)
x, y = np.meshgrid(s, s)
force_kernel = x**2 + y**2 < 0.2

def physics_update(state, noise, t):
    # Linear diffusion dynamics using Euler integration
    state = state + dt*convolve2d(state, diffusion_kernel, mode='same', boundary='wrap')

    # Periodic forcing
    amplitude = np.sin(force_freq*2*np.pi*t)**21
    force = amplitude*force_kernel
    state += force_amount*force

    # Random time-varying Gaussian colored noise
    noise = (1-noise_inertia)*npr.randn(*noise.shape) + noise_inertia*noise
    state += noise_amount*noise

    return state, noise
_____no_output_____
MIT
diffusion_simple.ipynb
BenGravell/gridtoy
Plotting
# Initialize the plot
plt.ioff()
fig, ax = plt.subplots()
im = plt.imshow(state, vmin=0, vmax=1)
ax.axis('off')
fig.tight_layout()

def update(t):
    global state, noise
    state, noise = physics_update(state, noise, t)
    im.set_data(state)
    return [im]

# Create the animation
animation = FuncAnimation(fig, update, frames=duration, interval=20, blit=True)
HTML(animation.to_html5_video())  # simple video
# HTML(animation.to_jshtml())     # interactive video player
_____no_output_____
MIT
diffusion_simple.ipynb
BenGravell/gridtoy
waimai_10k description
0. **Download:** [Github](https://github.com/SophonPlus/ChineseNlpCorpus/raw/master/datasets/waimai_10k/waimai_10k.csv)
1. **Data overview:** user reviews collected from a food-delivery platform; about 4,000 positive and about 8,000 negative reviews
2. **Suggested experiments:** sentiment / opinion / review polarity analysis
3. **Data source:** a food-delivery platform
4. **Original dataset:** [Chinese short-text sentiment analysis corpus: food-delivery reviews](https://download.csdn.net/download/cstkl/10236683), collected from the web; exact author and origin unknown
5. **Processing:**
   1. merged the original 2 files into 1 file
   2. removed duplicates
import pandas as pd

path = 'waimai_10k_文件夹_所在_路径'  # placeholder: path to the folder containing waimai_10k.csv
_____no_output_____
MIT
resources/nlp/ChineseNlpCorpus/datasets/waimai_10k/intro.ipynb
aicanhelp/ai-datasets
1. waimai_10k.csv: load the data
pd_all = pd.read_csv(path + 'waimai_10k.csv')

print('评论数目(总体):%d' % pd_all.shape[0])
print('评论数目(正向):%d' % pd_all[pd_all.label==1].shape[0])
print('评论数目(负向):%d' % pd_all[pd_all.label==0].shape[0])
评论数目(总体):11987 评论数目(正向):4000 评论数目(负向):7987
MIT
resources/nlp/ChineseNlpCorpus/datasets/waimai_10k/intro.ipynb
aicanhelp/ai-datasets
Field description

| Field | Description |
| ---- | ---- |
| label | 1 = positive review, 0 = negative review |
| review | review text |
pd_all.sample(20)
_____no_output_____
MIT
resources/nlp/ChineseNlpCorpus/datasets/waimai_10k/intro.ipynb
aicanhelp/ai-datasets
2. Build a balanced corpus
pd_positive = pd_all[pd_all.label==1]
pd_negative = pd_all[pd_all.label==0]

def get_balance_corpus(corpus_size, corpus_pos, corpus_neg):
    sample_size = corpus_size // 2
    pd_corpus_balance = pd.concat([corpus_pos.sample(sample_size, replace=corpus_pos.shape[0]<sample_size), \
                                   corpus_neg.sample(sample_size, replace=corpus_neg.shape[0]<sample_size)])

    print('评论数目(总体):%d' % pd_corpus_balance.shape[0])
    print('评论数目(正向):%d' % pd_corpus_balance[pd_corpus_balance.label==1].shape[0])
    print('评论数目(负向):%d' % pd_corpus_balance[pd_corpus_balance.label==0].shape[0])

    return pd_corpus_balance

waimai_10k_ba_4000 = get_balance_corpus(4000, pd_positive, pd_negative)

waimai_10k_ba_4000.sample(10)
评论数目(总体):4000 评论数目(正向):2000 评论数目(负向):2000
MIT
resources/nlp/ChineseNlpCorpus/datasets/waimai_10k/intro.ipynb
aicanhelp/ai-datasets
**Exercise on municipal and total populations by département**

From the datasets population_communes.csv and surface_departements.csv, build a new dataset that contains one row per département, with these columns:
- the sum of the département's "Population municipale" (municipal population)
- the sum of the département's "Population totale" (total population) (for the distinction between "Population municipale" and "Population totale", see: https://www.insee.fr/fr/metadonnees/definition/c1270)
- the share (in percent) of the municipal population relative to the total population
- the share (in percent) of the département's (municipal) population within its région
- the (municipal) population density in inhabitants / km2

(The final dataset should look like result-exo-cc.csv)

**Imports and a first look at the data sets**
import pandas as pd

communes = pd.read_csv('population_communes.csv')
surfaces = pd.read_csv('surface_departements.csv')

communes.head(5).reset_index()
communes.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 70764 entries, 0 to 70763 Data columns (total 9 columns): Code département 70764 non-null object Code canton 70724 non-null float64 Code arrondissement 70764 non-null int64 Code région 70764 non-null int64 Nom de la commune 70764 non-null object Code commune 70764 non-null int64 Nom de la région 70764 non-null object variable 70764 non-null object value 70764 non-null int64 dtypes: float64(1), int64(4), object(4) memory usage: 4.9+ MB
MIT
Exo6PopulationDepartement.ipynb
ms2020bgd/ErwanFloch
**Sum of the "Population municipale" values per département**
municipale = communes[communes['variable'] == 'Population municipale']
municipale = municipale.groupby(['Code région', 'Code département', 'variable']).sum().reset_index()
municipale = municipale.rename(columns = {'value':'Population municipale', 'Code département': 'Département'})
municipale[['Code région', 'Département', 'Population municipale']]
_____no_output_____
MIT
Exo6PopulationDepartement.ipynb
ms2020bgd/ErwanFloch
**Sum of the "Population totale" values per département**
totale = communes[communes.variable == 'Population totale']
totale.head(3)

totale = totale.groupby(['Code région', 'Code département', 'variable']).sum().reset_index()
totale = totale.rename(columns = {'value':'Population totale', 'Code département': 'Département'})
totale[['Code région', 'Département', 'Population totale']]
_____no_output_____
MIT
Exo6PopulationDepartement.ipynb
ms2020bgd/ErwanFloch
**Share (in percent) of the municipal population relative to the total population**
municipale['pourcentage municipale / totale'] = 100 * municipale['Population municipale'] / totale['Population totale']
municipale[['Code région', 'Département', 'pourcentage municipale / totale']]
_____no_output_____
MIT
Exo6PopulationDepartement.ipynb
ms2020bgd/ErwanFloch
**Share (in percent) of the département's (municipal) population within its région**
population_région = municipale.groupby('Code région').transform('sum')['Population municipale']
municipale['% pop. municipale / région'] = 100*municipale['Population municipale'] / population_région
municipale[['Code région', 'Département', '% pop. municipale / région']]
_____no_output_____
MIT
Exo6PopulationDepartement.ipynb
ms2020bgd/ErwanFloch
**(Municipal) population density in inhabitants / km2**
surfaces.info()
surfaces

surfaces = surfaces.rename(columns = {'code_insee' : 'Département'})
surfaces.head(5)

df = municipale.merge(surfaces, left_on='Département', right_on = 'Département')
df

df['densité'] = df['Population municipale'] / df['surf_km2']
df[['Code région', 'Département', 'densité']]

dfres = pd.read_csv('result-exo-cc.csv')
dfres
_____no_output_____
MIT
Exo6PopulationDepartement.ipynb
ms2020bgd/ErwanFloch
Text Analysis

Introduction

Every now and then you may want to analyze text that you have mined or even written yourself. For example, you may want to see (out of curiosity) which word occurs the most in a body of text. In this notebook, we are going to analyze a well-cited astrochemistry article, titled "Rotational Excitation of CO by Collisions with He, H, and H$_2$ Under Conditions in Interstellar Clouds" by Green and Thaddeus, published in _The Astrophysical Journal_ in 1976. Normally you would have to mine the text out of a PDF - I've already done this step for you (albeit poorly). The data is located in the `data` directory. To make sure the comparison is consistent throughout the analysis, we have to remove as much of the special characters and lower/upper casing as possible.

Aim

The objective in this notebook is to open the text file in Python, parse out every word, and generate a histogram of word occurrences. The scope will be to pick up all of the words __longer than 5 characters__ and count the number of times they appear.

__Note that your partner will have to perform the same analysis on a different text! Make sure your code is clean and well documented!__

These are the steps you need to take:
1. Open the text file for reading
2. Remove special characters from the text and remove case-sensitivity - I recommend replacing special characters with spaces!
3. Loop through the words, incrementing each time you find the same word again.
4. Histogram count the words - This can be done with the `Counter` function from `collections`, or with `pandas DataFrame` built-in methods.
5. Plot up the histogram with `matplotlib`

This is the preamble you probably need:
%matplotlib inline

# This function will count the occurrences in a list
from collections import Counter
# For your histogram needs
import numpy as np
# Optional, if you're courageous!
import pandas as pd
# For the plotting
from matplotlib import pyplot as plt
_____no_output_____
MIT
2_Text Analysis.ipynb
laserkelvin/PythonExercises
Method

1. Open a text file and read in its contents in a "Pythonic" way.
with open("data/GreenThaddeus-1976-ApJ.txt") as read_file:
    lines = read_file.read()
_____no_output_____
MIT
2_Text Analysis.ipynb
laserkelvin/PythonExercises
2. Clean up the text so that it's more easily processed, i.e. removing newline characters and other special characters
for character in ["\n", ",", ".", """(""", """)""", """:""", """*"""]:
    lines = lines.replace(character, " ")
_____no_output_____
MIT
2_Text Analysis.ipynb
laserkelvin/PythonExercises
3. I chose to store the data in a Python dictionary, where the key corresponds to the word, and the value is the count.
word_dict = dict()

for word in lines.split(" "):
    # If there are more than 5 characters, and is not a number, then we count
    if len(word) > 5 and word.isdigit() is False:
        # If the word is not already in the dictionary, add it in
        if word.lower() not in word_dict:
            word_dict[word.lower()] = 1
        # Otherwise just increment the counter
        else:
            word_dict[word.lower()] += 1
_____no_output_____
MIT
2_Text Analysis.ipynb
laserkelvin/PythonExercises
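For comparison, the same count can be produced with `collections.Counter`, which was imported in the preamble; a short sketch using the cleaned `lines` string from above:

```python
# Keep words longer than 5 characters that are not numbers, lower-cased
words = [w.lower() for w in lines.split(" ") if len(w) > 5 and not w.isdigit()]

counter = Counter(words)
print(counter.most_common(10))  # the ten most frequent words
```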
4. The way I chose to analyze the data was to use `pandas`. You can easily convert the dictionary into a `pandas` `DataFrame`, which handles data in a SQL-like fashion. I've oriented the `DataFrame` such that the words are in the index, and column 0 is the occurrence count.
df = pd.DataFrame.from_dict(word_dict, orient="index")
_____no_output_____
MIT
2_Text Analysis.ipynb
laserkelvin/PythonExercises
5. The values are sorted in descending order, and in place (so nothing is returned from the function call)
df.sort_values([0], ascending=False, inplace=True)
_____no_output_____
MIT
2_Text Analysis.ipynb
laserkelvin/PythonExercises
6. Since I didn't want to swamp the figure, I am plotting only the top 10 occurrences of a word. The `iloc` method will let you slice/select indices of a dataframe. The code below simply chooses the first 10 rows of the dataframe.
cut_df = df.iloc[:10]

plt.style.use("seaborn")

fig, ax = plt.subplots(figsize=(10,6))

ax.bar(cut_df.index, cut_df[0])
ax.set_title("Top 10 words in Green & Thaddeus, 1976")

fig.savefig("figures/Green1976-top10.png", dpi=300)
_____no_output_____
MIT
2_Text Analysis.ipynb
laserkelvin/PythonExercises
# @title Copyright 2020 The ALBERT Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
_____no_output_____
Apache-2.0
albert_finetune_optimization.ipynb
luanps/albert
ALBERT End to End (Fine-tuning + Predicting) with Cloud TPU

Overview

ALBERT is "A Lite" version of BERT, a popular unsupervised language representation learning algorithm. ALBERT uses parameter-reduction techniques that allow for large-scale configurations, overcome previous memory limitations, and achieve better behavior with respect to model degradation.

For a technical description of the algorithm, see our paper: https://arxiv.org/abs/1909.11942 (Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut).

This Colab demonstrates using a free Colab Cloud TPU to fine-tune GLUE tasks built on top of pretrained ALBERT models and run predictions on the tuned model. The colab demonstrates loading pretrained ALBERT models from both [TF Hub](https://www.tensorflow.org/hub) and checkpoints.

**Note:** You will need a GCP (Google Compute Engine) account and a GCS (Google Cloud Storage) bucket for this Colab to run. Please follow the [Google Cloud TPU quickstart](https://cloud.google.com/tpu/docs/quickstart) for how to create a GCP account and GCS bucket. You have [$300 free credit](https://cloud.google.com/free/) to get started with any GCP product. You can learn more about Cloud TPU at https://cloud.google.com/tpu/docs.

This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select **File > View on GitHub**.

Instructions: Train on TPU

1. Create a Cloud Storage bucket for your TensorBoard logs at http://console.cloud.google.com/storage and fill in the BUCKET parameter in the "Parameters" section below.
1. On the main menu, click Runtime and select **Change runtime type**. Set "TPU" as the hardware accelerator.
1. Click Runtime again and select **Runtime > Run All** (Watch out: the "Colab-only auth for this notebook and the TPU" cell requires user input). You can also run the cells manually with Shift-ENTER.

Set up your TPU environment

In this section, you perform the following tasks:
* Set up a Colab TPU running environment
* Verify that you are connected to a TPU device
* Upload your credentials to TPU to access your GCS bucket.
# TODO(lanzhzh): Add support for 2.x.
%tensorflow_version 1.x

import os
import pprint
import json
import tensorflow as tf

assert "COLAB_TPU_ADDR" in os.environ, "ERROR: Not connected to a TPU runtime; please see the first cell in this notebook for instructions!"
TPU_ADDRESS = "grpc://" + os.environ["COLAB_TPU_ADDR"]
TPU_TOPOLOGY = "2x2"
print("TPU address is", TPU_ADDRESS)

from google.colab import auth
auth.authenticate_user()
with tf.Session(TPU_ADDRESS) as session:
  print('TPU devices:')
  pprint.pprint(session.list_devices())

  # Upload credentials to TPU.
  with open('/content/adc.json', 'r') as f:
    auth_info = json.load(f)
  tf.contrib.cloud.configure_gcs(session, credentials=auth_info)
  # Now credentials are set for all future sessions on this TPU.
TensorFlow 1.x selected. TPU address is grpc://10.109.125.66:8470 WARNING:tensorflow: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see: * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md * https://github.com/tensorflow/addons * https://github.com/tensorflow/io (for I/O related ops) If you depend on functionality not listed there, please file an issue. TPU devices: [_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:CPU:0, CPU, -1, 2409099261969407911), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 1549954337002144741), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 5510839357321454835), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 873393571816079649), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 9117514880373904260), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 12704941682957268373), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 10623130967391006998), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 17893873024629234993), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 9214549767924212172), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 6427061617775819593), _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 8589934592, 2138631231408532535)]
Apache-2.0
albert_finetune_optimization.ipynb
luanps/albert
Prepare and import ALBERT modules

With your environment configured, you can now prepare and import the ALBERT modules. The following step clones the source code from GitHub.
import sys

!test -d albert || git clone https://github.com/google-research/albert albert
if not 'albert' in sys.path:
  sys.path += ['albert']

!pip install sentencepiece
Cloning into 'albert'... remote: Enumerating objects: 367, done. remote: Counting objects: 100% (14/14), done. remote: Compressing objects: 100% (11/11), done. remote: Total 367 (delta 5), reused 6 (delta 3), pack-reused 353 Receiving objects: 100% (367/367), 262.46 KiB | 3.50 MiB/s, done. Resolving deltas: 100% (237/237), done. Collecting sentencepiece Downloading sentencepiece-0.1.96-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)  |████████████████████████████████| 1.2 MB 5.2 MB/s [?25hInstalling collected packages: sentencepiece Successfully installed sentencepiece-0.1.96
Apache-2.0
albert_finetune_optimization.ipynb
luanps/albert