Dataset columns: markdown, code, output, license, path, repo_name
Now let's plot the small test graph:
plot(testgraph)
_____no_output_____
Apache-2.0
class02a_igraph_R.ipynb
curiositymap/Networks-in-Computational-Biology
# Axis Bank Stock Data Analysis Project Blog Post
> Data analysis of the Axis Bank stock market time-series dataset.

- toc: true
- badges: true
- comments: true
- categories: [jupyter]
- image: images/stockdataimg.jpg

## Axis Bank Stock Data Analysis

This project is based on a dataset I obtained from Kaggle. The analysis is performed on the 'AXISBANK' stock market data from 2019-2021. AXISBANK is one of the stocks listed in the NIFTY 50 index. The NIFTY 50 is a benchmark Indian stock market index that represents the weighted average of 50 of the largest Indian companies listed on the National Stock Exchange; it is one of the two main stock indices used in India, the other being the BSE SENSEX. The analysis is performed on the stock quotes of "AXIS BANK" taken from the NIFTY 50 Stock Market dataset obtained from a Kaggle repo.

Axis Bank Limited, formerly known as UTI Bank (1993–2007), is an Indian banking and financial services company headquartered in Mumbai, Maharashtra. It sells financial services to large and mid-size companies, SMEs and retail businesses. The bank was founded on 3 December 1993 as UTI Bank, opening its registered office in Ahmedabad and a corporate office in Mumbai. The bank was promoted jointly by the Administrator of the Unit Trust of India (UTI), Life Insurance Corporation of India (LIC), General Insurance Corporation, National Insurance Company, The New India Assurance Company, The Oriental Insurance Corporation and United India Insurance Company. The first branch was inaugurated on 2 April 1994 in Ahmedabad by Manmohan Singh, then finance minister of India.

I chose this dataset because of the importance of NIFTY 50 stocks to the Indian economy; in many ways, the NIFTY 50 reflects how well the Indian capital markets are doing.

### Downloading the Dataset

In this section of the Jupyter notebook we download the dataset from the Kaggle dataset repositories using the Python library `opendatasets`. While downloading, we are asked for a Kaggle user ID and an API token to access the dataset. Kaggle is a platform used for obtaining datasets and for various other data science tasks.
!pip install jovian opendatasets --upgrade --quiet
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Let's begin by downloading the data, and listing the files within the dataset.
# Change this
dataset_url = 'https://www.kaggle.com/rohanrao/nifty50-stock-market-data'

import opendatasets as od
od.download(dataset_url)
Skipping, found downloaded files in "./nifty50-stock-market-data" (use force=True to force download)
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
The dataset has been downloaded and extracted.
# Change this
data_dir = './nifty50-stock-market-data'

import os
os.listdir(data_dir)
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Let us save and upload our work to Jovian before continuing.
project_name = "nifty50-stockmarket-data"  # change this (use lowercase letters and hyphens only)

!pip install jovian --upgrade -q

import jovian
jovian.commit(project=project_name)
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
## Data Preparation and Cleaning

Data preparation and cleaning constitute the first part of any data analysis project. We do this in order to retain the valuable data in the data frame, i.e. the data that is relevant for our analysis. The process is also used to correct erroneous values in the dataset (e.g. replacing NaN with 0). After preparation and cleaning, the data can be used for analysis.

Our dataframe contains a lot of information that is not relevant, so we are going to drop a few columns and fix some of the elements in the data frame for better analysis. We are also going to convert the Date column into DateTime format, which can then be used to group the data by month/year. A minimal sketch of this pattern is shown right below, before the actual cleaning cell.
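As a small, hedged illustration of this cleaning pattern (the toy frame and the `Unused` column below are made up for demonstration and are not part of the Kaggle dataset):

```python
import pandas as pd
import numpy as np

# Toy example of the cleaning steps described above
toy = pd.DataFrame({
    'Date': ['2019-01-01', '2019-01-02'],
    'Close': [620.5, np.nan],
    'Unused': ['x', 'y'],
})
toy['Close'] = toy['Close'].fillna(0)        # replace erroneous/missing values (NaN -> 0)
toy = toy.drop(['Unused'], axis=1)           # drop a column not relevant for the analysis
toy['Date'] = pd.to_datetime(toy['Date'])    # DateTime format enables month/year grouping
print(toy.dtypes)
```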
import pandas as pd
import numpy as np

axis_df = pd.read_csv(data_dir + "/AXISBANK.csv")
axis_df.info()
axis_df.describe()
axis_df

axis_df['Symbol'] = np.where(axis_df['Symbol'] == 'UTIBANK', 'AXISBANK', axis_df['Symbol'])
axis_df

axis_new_df = axis_df.drop(['Last', 'Series', 'VWAP', 'Trades', 'Deliverable Volume', '%Deliverble'], axis=1)
axis_new_df

def getIndexes(dfObj, value):
    '''Get index positions of value in dataframe i.e. dfObj.'''
    listOfPos = list()
    # Get bool dataframe with True at positions where the given value exists
    result = dfObj.isin([value])
    # Get list of columns that contains the value
    seriesObj = result.any()
    columnNames = list(seriesObj[seriesObj == True].index)
    # Iterate over list of columns and fetch the rows indexes where value exists
    for col in columnNames:
        rows = list(result[col][result[col] == True].index)
        for row in rows:
            listOfPos.append((row, col))
    # Return a list of tuples indicating the positions of value in the dataframe
    return listOfPos

listOfPosition_axis = getIndexes(axis_df, '2019-01-01')
listOfPosition_axis

axis_new_df.drop(axis_new_df.loc[0:4728].index, inplace=True)
axis_new_df
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Summary of the operations done so far:
1. We took a CSV file containing stock data for Axis Bank from the NIFTY 50 dataset and performed data cleaning operations on it.
2. The original data contains stock price quotations from 2001 onwards, but for our analysis we kept only the years 2019-2021.
3. We then dropped the columns that are not relevant for our analysis using pandas DataFrame operations.
axis_new_df.reset_index(drop=True, inplace=True)
axis_new_df

axis_new_df['Date'] = pd.to_datetime(axis_new_df['Date'])  # we changed the Dates into Datetime format from the object format
axis_new_df.info()

axis_new_df['Daily Lag'] = axis_new_df['Close'].shift(1)  # Added a new column Daily Lag to calculate daily returns of the stock
axis_new_df['Daily Returns'] = (axis_new_df['Daily Lag'] / axis_new_df['Close']) - 1

axis_dailyret_df = axis_new_df.drop(['Prev Close', 'Open', 'High', 'Low', 'Close', 'Daily Lag'], axis=1)
axis_dailyret_df

import jovian
jovian.commit()
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
## Exploratory Analysis and Visualization

Here we compute the mean and max/min stock quotes of the AXISBANK stock; in particular, we compute the mean of the Daily Returns column. We first convert the date-wise index to a month-wise one, to get a consolidated dataframe that can be analyzed over a broader timeline. We then split the data frame into three, for the years 2019, 2020 and 2021 respectively, in order to analyze the yearly performance of the stock.

Let's begin by importing `matplotlib.pyplot` and `seaborn`.
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline

sns.set_style('darkgrid')
matplotlib.rcParams['font.size'] = 10
matplotlib.rcParams['figure.figsize'] = (15, 5)
matplotlib.rcParams['figure.facecolor'] = '#00000000'
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Here we explore the Daily Returns column by plotting a line graph of the daily returns summed by month. In the resulting plot the monthly sums of daily returns increase from left to right; note that the months are sorted by value rather than chronologically.
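For comparison, a small optional sketch (reusing `axis_dailyret_df` defined earlier) that keeps the months in calendar order instead of sorting them by value; this is a variation, not part of the original analysis:

```python
# Sum the daily returns per calendar month, keeping chronological order
monthly_returns = (axis_dailyret_df
                   .groupby(axis_dailyret_df['Date'].dt.to_period('M'))['Daily Returns']
                   .sum())
monthly_returns.plot(title='Monthly sum of daily returns (chronological order)')
```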
axis_dailyret_plot = axis_dailyret_df.groupby(axis_dailyret_df['Date'].dt.strftime('%B'))['Daily Returns'].sum().sort_values()
plt.plot(axis_dailyret_plot)

axis_new_df['Year'] = pd.DatetimeIndex(axis_new_df['Date']).year
axis_new_df

axis2019_df = axis_new_df[axis_new_df.Year == 2019]
axis2020_df = axis_new_df[axis_new_df.Year == 2020]
axis2021_df = axis_new_df[axis_new_df.Year == 2021]

axis2019_df.reset_index(drop=True, inplace=True)
axis2019_df
axis2020_df.reset_index(drop=True, inplace=True)
axis2020_df
axis2021_df.reset_index(drop=True, inplace=True)
axis2021_df
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Summary of the above exploratory analysis: in the code cells above we plotted the data by exploring one column, and we split the DataFrame into three DataFrames containing the stock quote data year-wise, i.e. for the years 2019, 2020 and 2021. To split the DataFrame by year we added a new column called 'Year', generated from the DateTime values of the 'Date' column.
axis_range_df = axis_dailyret_df['Daily Returns'].max() - axis_dailyret_df['Daily Returns'].min()
axis_range_df

axis_mean_df = axis_dailyret_df['Daily Returns'].mean()
axis_mean_df
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
In the two code cells above we computed the range (the difference between the maximum and minimum values of the column) and the mean of the daily returns of the Axis Bank stock.

### Exploratory Analysis of stock quotes year-wise for Axis Bank

In this section we plot the closing values of the stock throughout the year for 2019, 2020 and 2021. We only have partial data for 2021 (up to April 2021). We also plot a comparison of the performance throughout the year for 2019 and 2020, since we have full data for those two years.
plt.plot(axis2019_df['Date'], axis2019_df['Close'])
plt.title('Closing Values of stock for the year 2019')
plt.xlabel(None)
plt.ylabel('Closing price of the stock')

plt.plot(axis2020_df['Date'], axis2020_df['Close'])
plt.title('Closing Values of stock for the year 2020')
plt.xlabel(None)
plt.ylabel('Closing price of the stock')

plt.plot(axis2021_df['Date'], axis2021_df['Close'])
plt.title('Closing Values of stock for the year 2021 Till April Month')
plt.xlabel(None)
plt.ylabel('Closing price of the stock')
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
**TODO** - Explore one or more columns by plotting a graph below, and add some explanation about it
plt.style.use('fivethirtyeight')

plt.plot(axis2019_df['Date'], axis2019_df['Close'], linewidth=3, label='2019')
plt.plot(axis2020_df["Date"], axis2020_df['Close'], linewidth=3, label='2020')
plt.legend(loc='best')
plt.title('Closing Values of stock for the years 2019 and 2020')
plt.xlabel(None)
plt.ylabel('Closing price of the stock')

print(plt.style.available)
['Solarize_Light2', '_classic_test_patch', 'bmh', 'classic', 'dark_background', 'fast', 'fivethirtyeight', 'ggplot', 'grayscale', 'seaborn', 'seaborn-bright', 'seaborn-colorblind', 'seaborn-dark', 'seaborn-dark-palette', 'seaborn-darkgrid', 'seaborn-deep', 'seaborn-muted', 'seaborn-notebook', 'seaborn-paper', 'seaborn-pastel', 'seaborn-poster', 'seaborn-talk', 'seaborn-ticks', 'seaborn-white', 'seaborn-whitegrid', 'tableau-colorblind10']
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
Let us save and upload our work to Jovian before continuing
import jovian
jovian.commit()
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
## Asking and Answering Questions

In this section we answer some questions about the dataset using data analysis libraries such as NumPy, Pandas, Matplotlib and Seaborn, and see how handy these libraries are when drawing inferences from a dataset.

### Q1: What was the change in price and volume of the stock traded over time?
plt.plot(axis2019_df['Date'], axis2019_df['Close'], linewidth=3, label='2019')
plt.plot(axis2020_df["Date"], axis2020_df['Close'], linewidth=3, label='2020')
plt.plot(axis2021_df["Date"], axis2021_df['Close'], linewidth=3, label='2021')
plt.legend(loc='best')
plt.title('Closing Price of stock for the years 2019-2021(Till April)')
plt.xlabel(None)
plt.ylabel('Closing price of the stock')

print('The Maximum closing price of the stock during 2019-2021 is', axis_new_df['Close'].max())
print('The Minimum closing price of the stock during 2019-2021 is', axis_new_df['Close'].min())
print('The Index for the Maximum closing price in the dataframe is', getIndexes(axis_new_df, axis_new_df['Close'].max()))
print('The Index for the Minimum closing price in the dataframe is', getIndexes(axis_new_df, axis_new_df['Close'].min()))
print(axis_new_df.iloc[104])
print(axis_new_df.iloc[303])
The Maximum closing price of the stock during 2019-2021 is 822.8 The Minimum closing price of the stock during 2019-2021 is 303.15 The Index for the Maximum closing price in the dataframe is [(105, 'Prev Close'), (104, 'Close'), (105, 'Daily Lag')] The Index for the Minimum closing price in the dataframe is [(304, 'Prev Close'), (303, 'Close'), (304, 'Daily Lag')] Date 2019-06-04 00:00:00 Symbol AXISBANK Prev Close 812.65 Open 807.55 High 827.75 Low 805.5 Close 822.8 Volume 9515354 Turnover 778700415970000.0 Daily Lag 812.65 Daily Returns -0.012336 Year 2019 Name: 104, dtype: object Date 2020-03-24 00:00:00 Symbol AXISBANK Prev Close 308.65 Open 331.95 High 337.5 Low 291.0 Close 303.15 Volume 50683611 Turnover 1578313503950000.0 Daily Lag 308.65 Daily Returns 0.018143 Year 2020 Name: 303, dtype: object
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
* As we can see from the plots above, there was a dip in the closing price during 2020. The maximum closing price occurred on 2019-06-04 (Close = 822.8) and the lowest closing price occurred on 2020-03-24 (Close = 303.15). This suggests that the start of the pandemic caused the steep downturn in the stock's closing price.
plt.plot(axis2019_df["Date"], axis2019_df["Volume"], linewidth=2, label='2019')
plt.plot(axis2020_df["Date"], axis2020_df["Volume"], linewidth=2, label='2020')
plt.plot(axis2021_df["Date"], axis2021_df["Volume"], linewidth=2, label='2021')
plt.legend(loc='best')
plt.title('Volume of stock traded in the years 2019-2021(till April)')
plt.ylabel('Volume')
plt.xlabel(None)

print('The Maximum volume of the stock traded during 2019-2021 is', axis_new_df['Volume'].max())
print('The Minimum volume of the stock traded during 2019-2021 is', axis_new_df['Volume'].min())
print('The Index for the Maximum volume stock traded in the dataframe is', getIndexes(axis_new_df, axis_new_df['Volume'].max()))
print('The Index for the Minimum volume stock traded in the dataframe is', getIndexes(axis_new_df, axis_new_df['Volume'].min()))
print(axis_new_df.iloc[357])
print(axis_new_df.iloc[200])
The Maximum volume of the stock traded during 2019-2021 is 96190274 The Minimum volume of the stock traded during 2019-2021 is 965772 The Index for the Maximum volume stock traded in the dataframe is [(357, 'Volume')] The Index for the Minimum volume stock traded in the dataframe is [(200, 'Volume')] Date 2020-06-16 00:00:00 Symbol AXISBANK Prev Close 389.6 Open 404.9 High 405.0 Low 360.4 Close 381.55 Volume 96190274 Turnover 3654065942305001.0 Daily Lag 389.6 Daily Returns 0.021098 Year 2020 Name: 357, dtype: object Date 2019-10-27 00:00:00 Symbol AXISBANK Prev Close 708.6 Open 711.0 High 715.05 Low 708.55 Close 710.1 Volume 965772 Turnover 68696126654999.992188 Daily Lag 708.6 Daily Returns -0.002112 Year 2019 Name: 200, dtype: object
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
As we can see from the graph above, a large volume of shares was traded during 2020, i.e. the stock changed hands a lot that year. The highest volume was traded on 2020-06-16 (Volume = 96,190,274) and the lowest volume during 2019-2021 was traded on 2019-10-27 (Volume = 965,772).

### Q2: What was the daily return of the stock on average?

The daily return measures the change in a stock's price as a percentage of the previous day's closing price. A positive return means the stock has grown in value, while a negative return means it has lost value. We will also calculate the maximum daily return of the stock during 2019-2021.
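For reference, the conventional definition above (change relative to the previous day's close) corresponds to pandas' `pct_change`; the notebook's own 'Daily Returns' column is computed as `Daily Lag / Close - 1`, which measures the change relative to the current close rather than the previous one. A minimal sketch, assuming `axis_new_df` from the earlier cells:

```python
# Conventional daily return: Close_t / Close_{t-1} - 1
conventional_returns = axis_new_df['Close'].pct_change()
print(conventional_returns.describe())
```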
#axis_new_df['Daily Returns'].plot(title='Axis Bank Daily Returns')
plt.plot(axis_new_df['Date'], axis_new_df['Daily Returns'], linewidth=2, label='Daily Returns')
plt.legend(loc='best')
plt.title('Daily Returns of stock for the years 2019-2021(Till April)')
plt.xlabel(None)
plt.ylabel('Daily Returns of the stock')

plt.plot(axis_new_df['Date'], axis_new_df['Daily Returns'], linestyle='--', marker='o')
plt.title('Daily Returns of stock for the years 2019-2021(Till April)')
plt.xlabel(None)
plt.ylabel('Daily Returns of the stock')

print('The Maximum daily return during the years 2020 is', axis_new_df['Daily Returns'].max())
index = getIndexes(axis_new_df, axis_new_df['Daily Returns'].max())
axis_new_df.iloc[302]

def getIndexes(dfObj, value):
    '''Get index positions of value in dataframe i.e. dfObj.'''
    listOfPos = list()
    # Get bool dataframe with True at positions where the given value exists
    result = dfObj.isin([value])
    # Get list of columns that contains the value
    seriesObj = result.any()
    columnNames = list(seriesObj[seriesObj == True].index)
    # Iterate over list of columns and fetch the rows indexes where value exists
    for col in columnNames:
        rows = list(result[col][result[col] == True].index)
        for row in rows:
            listOfPos.append((row, col))
    # Return a list of tuples indicating the positions of value in the dataframe
    return listOfPos
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
As we can see from the plot, there were high daily returns for the stock around late March 2020, followed by ups and downs from April to July 2020. Most of the changes in daily returns occurred during April-July 2020; at other times the daily returns were almost flat. The maximum daily return during 2019-2021 occurred on 2020-03-23 (observed from the pandas table above).
Avgdailyret_2019 = axis2019_df['Daily Returns'].sum() / len(axis2019_df['Daily Returns'])
Avgdailyret_2020 = axis2020_df['Daily Returns'].sum() / len(axis2020_df['Daily Returns'])
Avgdailyret_2021 = axis2021_df['Daily Returns'].sum() / len(axis2021_df['Daily Returns'])

# create a dataset
data_dailyret = {'2019': Avgdailyret_2019, '2020': Avgdailyret_2020, '2021': Avgdailyret_2021}
Years = list(data_dailyret.keys())
Avgdailyret = list(data_dailyret.values())

# plotting a bar chart
plt.figure(figsize=(10, 7))
plt.bar(Years, Avgdailyret, color='maroon', width=0.3)
plt.xlabel("Years")
plt.ylabel("Average Daily Returns of the Stock Traded")
plt.title("Average Daily Returns of the Stock over the years 2019-2021(Till April) (in 10^7)")
plt.show()

plt.figure(figsize=(12, 7))
sns.distplot(axis_new_df['Daily Returns'].dropna(), bins=100, color='purple')
plt.title('Histogram of Daily Returns')
plt.tight_layout()
/opt/conda/lib/python3.9/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms). warnings.warn(msg, FutureWarning)
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
### Q3: What is the average trading volume of the stock for the past three years?
Avgvol_2019 = axis2019_df['Volume'].sum() / len(axis2019_df['Volume'])
Avgvol_2020 = axis2020_df['Volume'].sum() / len(axis2020_df['Volume'])
Avgvol_2021 = axis2021_df['Volume'].sum() / len(axis2021_df['Volume'])

# create a dataset
data_volume = {'2019': Avgvol_2019, '2020': Avgvol_2020, '2021': Avgvol_2021}
Years = list(data_volume.keys())
AvgVol = list(data_volume.values())

# plotting a bar chart
plt.figure(figsize=(13, 7))
plt.bar(Years, AvgVol, color='maroon', width=0.3)
plt.xlabel("Years")
plt.ylabel("Average Volume of the Stock Traded")
plt.title("Average Trading volume of the Stock over the years 2019-2021(Till April) (in 10^7)")
plt.show()
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
From the plot above we can say that a larger volume of the Axis Bank stock was traded during 2020; there is a significant rise in the trading volume of the stock from 2019 to 2020.

### Q4: What is the average closing price of the stock for the past three years?
Avgclose_2019 = axis2019_df['Close'].sum() / len(axis2019_df['Close'])
Avgclose_2020 = axis2020_df['Close'].sum() / len(axis2020_df['Close'])
Avgclose_2021 = axis2021_df['Close'].sum() / len(axis2021_df['Close'])

# create a dataset
data_volume = {'2019': Avgclose_2019, '2020': Avgclose_2020, '2021': Avgclose_2021}
Years = list(data_volume.keys())
AvgClose = list(data_volume.values())

# plotting a bar chart
plt.figure(figsize=(13, 7))
plt.bar(Years, AvgClose, color='maroon', width=0.3)
plt.xlabel("Years")
plt.ylabel("Average Closing Price of the Stock Traded")
plt.title("Average Closing price of the Stock over the years 2019-2021(Till April)")
plt.show()
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
We have seen that the trading volume of the stock was highest in 2020; in contrast, 2020 has the lowest average closing price of the three years. For 2019 and 2021 the average closing price is almost the same, with little change in value.

Let us save and upload our work to Jovian before continuing.
import jovian
jovian.commit()
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
## Inferences and Conclusion

The above analysis was done on the stock quotes of AXIS BANK for the years 2019-2021. From the analysis we can say that 2020 saw a lot of unsteady movement: there was a rise in the volume of stock traded on the exchange, meaning many more transactions in the stock. The stock saw swift buy/sell traffic during 2020 and fell back to normal levels in 2021. In contrast to the volume, the closing price of the stock decreased during 2020, which suggests that the volume traded has no simple relation to the price change of the stock (even though many people assume the two are correlated). The price decrease may have been due to the rise of the pandemic in India during 2020.
import jovian
jovian.commit()
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
## References and Future Work

Future ideas for the analysis:
* I am planning to take this basic analysis of the AXISBANK stock quotes further and build a machine learning model to predict future stock prices.
* I plan to automate the data analysis process for every stock in the NIFTY 50 index by defining reusable functions and automating the analysis procedures.
* Study stronger correlations between the different quotes of the stock and analyze how and why they are related.

References/links used for this project:
* https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html
* https://stackoverflow.com/questions/16683701/in-pandas-how-to-get-the-index-of-a-known-value
* https://towardsdatascience.com/working-with-datetime-in-pandas-dataframe-663f7af6c587
* https://thispointer.com/python-find-indexes-of-an-element-in-pandas-dataframe/
* https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#timeseries-friendly-merging
* https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html
* https://towardsdatascience.com/financial-analytics-exploratory-data-analysis-of-stock-data-d98cbadf98b9
* https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.transpose.html
* https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.set_index.html
* https://pandas.pydata.org/docs/reference/api/pandas.merge.html
* https://stackoverflow.com/questions/14661701/how-to-drop-a-list-of-rows-from-pandas-dataframe
* https://www.interviewqs.com/ddi-code-snippets/extract-month-year-pandas
* https://stackoverflow.com/questions/18172851/deleting-dataframe-row-in-pandas-based-on-column-value
* https://queirozf.com/entries/matplotlib-examples-displaying-and-configuring-legends
* https://jakevdp.github.io/PythonDataScienceHandbook/04.06-customizing-legends.html
* https://matplotlib.org/stable/tutorials/intermediate/legend_guide.html
* https://matplotlib.org/devdocs/gallery/subplots_axes_and_figures/subplots_demo.html
* https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplots.html
* https://stackoverflow.com/questions/332289/how-do-you-change-the-size-of-figures-drawn-with-matplotlib
* https://www.investopedia.com/articles/investing/093014/stock-quotes-explained.asp
* https://stackoverflow.com/questions/44908383/how-can-i-group-by-month-from-a-datefield-using-python-pandas
* https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.hist.html
* https://note.nkmk.me/en/python-pandas-dataframe-rename/
* https://stackoverflow.com/questions/24748848/pandas-find-the-maximum-range-in-all-the-columns-of-dataframe
* https://stackoverflow.com/questions/29233283/plotting-multiple-lines-in-different-colors-with-pandas-dataframe
* https://jakevdp.github.io/PythonDataScienceHandbook/04.14-visualization-with-seaborn.html
* https://www.geeksforgeeks.org/python-pandas-extracting-rows-using-loc/
import jovian
jovian.commit()
_____no_output_____
Apache-2.0
_notebooks/2022-02-04-data-analysis-course-project.ipynb
sandeshkatakam/My-Machine_learning-Blog
# Array Interview Questions

## Anagram Check

An anagram is a transformation of a word: using the same letters, rearranged in any order, to form a different word; any amount of whitespace is allowed. For example, "apple" -> "ap e lp".
def anagram(s1, s2):
    l_bound = ord('0')
    r_bound = ord('z')
    appeared = [0] * (r_bound - l_bound)

    for letter in s1:
        if letter != ' ':
            mapping = ord(letter) - l_bound
            appeared[mapping] += 1

    for letter in s2:
        if letter != ' ':
            mapping = ord(letter) - l_bound
            appeared[mapping] -= 1
            if appeared[mapping] < 0:
                return False

    for ele in appeared:
        if ele != 0:
            return False

    return True


import unittest

class TestAnagram(unittest.TestCase):

    def test(self, solve):
        self.assertEqual(solve('go go go', 'gggooo'), True)
        self.assertEqual(solve('abc', 'cba'), True)
        self.assertEqual(solve('hi man', 'hi man'), True)
        self.assertEqual(solve('aabbcc', 'aabbc'), False)
        self.assertEqual(solve('123', '1 2'), False)
        print('success')

t = TestAnagram('test')  # need to provide the method name, default is runTest
t.test(anagram)
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
My solution here may not be complete, because the array mapping only covers digits and letters; if the input contains symbols it would not know what to do. Of course a dict would get rid of this annoying problem, but since this is classified as an array problem, I deliberately solved it with an array only.

## Array Pair Sum

Given an array of numbers, find all distinct pairs of numbers that add up to a specific value k, e.g.

```python
pair_sum([1,3,2,2], 4)
# (1,3)
# (2,2)
# We only need to return how many pairs there are, so the answer here is the number 2
```
def pair_sum(arr, k):
    res = [False] * len(arr)

    for i in range(len(arr) - 1):
        for j in range(i + 1, len(arr)):
            if arr[i] + arr[j] == k:
                res[i] = True
                res[j] = True

    pair_count = [1 for ele in res if ele == True]
    return len(pair_count) // 2
_____no_output_____
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
The solution above runs in $O(n^2)$, but if we are allowed to use a dict or set we can bring it down to $O(n)$, because a lookup like `n in dict` costs only $O(1)$, whereas searching for a value in an array costs $O(n)$. Below we switch to a set/dict-based implementation.
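A quick, informal way to see the difference in membership-test cost (a rough sketch, not a rigorous benchmark):

```python
import timeit

data_list = list(range(100000))
data_set = set(data_list)

# Searching for the last element: the list scan is O(n), the set lookup is O(1)
print(timeit.timeit('99999 in data_list', globals=globals(), number=1000))
print(timeit.timeit('99999 in data_set', globals=globals(), number=1000))
```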
def pair_sum_set_version(arr, k):
    to_seek = set()
    output = set()

    for num in arr:
        target = k - num
        if target not in to_seek:
            to_seek.add(num)
        else:
            output.add((min(num, target), max(num, target)))

    return len(output)


class TestPairSum(unittest.TestCase):

    def test(self, solve):
        self.assertEqual(solve([1,9,2,8,3,7,4,6,5,5,13,14,11,13,-1], 10), 6)
        self.assertEqual(solve([1,2,3,1], 3), 1)
        self.assertEqual(solve([1,3,2,2], 4), 2)
        print('success')

t = TestPairSum()
t.test(pair_sum_set_version)
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
## Finding the Missing Element

You are given two arrays; the second array is the first array with one random element removed and the rest shuffled. Your task is to find the missing element.
def finder(ary, ary2):
    table = {}

    for ele in ary:
        if ele in table:
            table[ele] += 1
        else:
            table[ele] = 1

    for ele in ary2:
        if ele in table:
            table[ele] -= 1
        else:
            return ele

    for k, v in table.items():
        if v != 0:
            return k
_____no_output_____
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
If we build the table from ary2 first, the logic above becomes cleaner and we can drop the final step:

```python
for ele in ary2:
    table[ele] = 1

for ele in ary1:
    if (ele not in table) or (table[ele] == 0):
        return ele
    else:
        table[ele] -= 1
```

This solution is about as fast as it gets: a sorting-based approach needs at least $n \log n$ for the sort, after which you loop to find the element that differs. There is also a devilishly clever solution I really didn't think of: using XOR. Let's look at the code first.

XOR (exclusive or) is an "exclusive" OR: a plain OR is true as soon as either operand is true, but both being true is ambiguous for this purpose, so that case is excluded. XOR is therefore "one or the other, but not both": $A \vee B$ but not $A \wedge B$. Translating that directly into math:

$$ A \oplus B = (A \vee B) \wedge \neg ( A \wedge B) $$

In short, because of this property of XOR, if you XOR together two identical arrays the final result is 0.

```python
def finder_xor(arr1, arr2):
    result = 0
    # Perform an XOR between the numbers in the arrays
    for num in arr1 + arr2:
        result ^= num
    print(result)
    return result
```
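A small self-contained check of the cancellation property this trick relies on (every paired value XORs to 0, so only the missing element survives):

```python
nums1 = [1, 2, 3, 4, 5]
nums2 = [3, 1, 5, 2]      # same numbers with 4 removed and shuffled
result = 0
for n in nums1 + nums2:
    result ^= n
print(result)             # 4
```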
class TestFinder(unittest.TestCase):

    def test(self, solve):
        self.assertEqual(solve([5,5,7,7], [5,7,7]), 5)
        self.assertEqual(solve([1,2,3,4,5,6,7], [3,7,2,1,4,6]), 5)
        self.assertEqual(solve([9,8,7,6,5,4,3,2,1], [9,8,7,5,4,3,2,1]), 6)
        print('success')

t = TestFinder()
t.test(finder)
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
## Largest Continuous Sum

Given an array of numbers, your task is to find which run of consecutive numbers has the largest sum. It is not necessarily the sum of all the numbers, since the array contains negatives; the maximum may come from a run of some length starting at some position.
def lar_con_sum(ary):
    if len(ary) == 0:
        return 0

    max_sum = cur_sum = ary[0]

    for num in ary[1:]:
        cur_sum = max(cur_sum + num, num)
        max_sum = max(cur_sum, max_sum)

    return max_sum
_____no_output_____
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
The idea here is that the maximum sum over a run ending at position n must come from the maximum sum over a run ending at position n-1. At index 0 there is only one element, so it is itself the maximum. At index 1 we compare ele[0]+ele[1] against the current maximum and take the larger of the two. Note that we need to keep a separate running sum (cur_sum), because it is what lets us decide, when we later hit negative numbers, where a new candidate maximum should start; this candidate (cur_sum) is still compared against the previous maximum (max_sum).
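A short trace of `lar_con_sum` on the last test case below, to make the running-sum logic concrete:

```python
# num   cur_sum = max(cur_sum + num, num)   max_sum
#  1           1 (initial)                      1
#  2           3                                3
# -10         -7                                3
#  5           5   <- better to restart here    5
#  6          11                               11
print(lar_con_sum([1, 2, -10, 5, 6]))  # 11
```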
class TestLargestConSum(unittest.TestCase):

    def test(self, solve):
        self.assertEqual(solve([1,2,-1,3,4,-1]), 9)
        self.assertEqual(solve([1,2,-1,3,4,10,10,-10,-1]), 29)
        self.assertEqual(solve([-1,1]), 1)
        self.assertEqual(solve([1,2,-10,5,6]), 11)
        print('success')

t = TestLargestConSum()
t.test(lar_con_sum)
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
## Sentence Reversal

Given a string, reverse the order of its words. For example: 'here it is' -> 'is it here'.
def sentenceReversal(str1):
    str1 = str1.strip()
    words = str1.split()
    result = ''

    for i in range(len(words)):
        result += ' ' + words[len(words) - i - 1]

    return result.strip()


class TestSentenceReversal(unittest.TestCase):

    def test(self, solve):
        self.assertEqual(solve(' space before'), 'before space')
        self.assertEqual(solve('space after '), 'after space')
        self.assertEqual(solve(' Hello John how are you '), 'you are how John Hello')
        self.assertEqual(solve('1'), '1')
        print('success')

t = TestSentenceReversal()
t.test(sentenceReversal)
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
It's worth noting Python's `str.split` method: called with no arguments it effectively strips the string and splits on runs of whitespace, which gives a different result than `split(' ')`. Also, in an interview you may be expected to implement this with more basic operations, i.e. with fewer Python tricks.

## String Compression

Given a string, convert it to a count-plus-letter notation. This compression scheme feels a bit odd, though, because it cannot preserve the order of the letters.
def compression(str1):
    mapping = {}
    letter_order = [False]
    result = ''

    for ele in str1:
        if ele != letter_order[-1]:
            letter_order.append(ele)

        if ele not in mapping:
            mapping[ele] = 1
        else:
            mapping[ele] += 1

    for key in letter_order[1:]:
        result += '{}{}'.format(key, mapping[key])

    return result


class TestCompression(unittest.TestCase):

    def test(self, solve):
        self.assertEqual(solve(''), '')
        self.assertEqual(solve('AABBCC'), 'A2B2C2')
        self.assertEqual(solve('AAABCCDDDDD'), 'A3B1C2D5')
        print('success')

t = TestCompression()
t.test(compression)
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
## Unique Characters in a String

Given a string, determine whether it consists entirely of unique characters.
def uni_char(str1):
    mapping = {}

    for letter in str1:
        if letter in mapping:
            return False
        else:
            mapping[letter] = True

    return True


def uni_char2(str1):
    return len(set(str1)) == len(str1)


class TestUniChar(unittest.TestCase):

    def test(self, solve):
        self.assertEqual(solve(''), True)
        self.assertEqual(solve('goo'), False)
        self.assertEqual(solve('abcdefg'), True)
        print('success')

t = TestUniChar()
t.test(uni_char2)
success
MIT
Array Interview Question.ipynb
sillygod/ds_and_algorithm
# Multi-Layer Perceptron, MNIST

---

In this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) hand-written digit database.

The process will be broken down into the following steps:
> 1. Load and visualize the data
> 2. Define a neural network
> 3. Train the model
> 4. Evaluate the performance of our trained model on a test dataset!

Before we begin, we have to import the necessary libraries for working with data and PyTorch.
# import libraries
import torch
import numpy as np
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
---
## Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)

Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time.

This cell will create DataLoaders for each of our datasets.
from torchvision import datasets
import torchvision.transforms as transforms

# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20

# convert data to torch.FloatTensor
transform = transforms.ToTensor()

# choose the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
                            download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
                           download=True, transform=transform)

# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
                                           num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
                                          num_workers=num_workers)
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
### Visualize a Batch of Training Data

The first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.
import matplotlib.pyplot as plt
%matplotlib inline

# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()

# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(images[idx]), cmap='gray')
    # print out the correct label for each image
    # .item() gets the value contained in a Tensor
    ax.set_title(str(labels[idx].item()))
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
View an Image in More Detail
img = np.squeeze(images[1])

fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max() / 2.5
for x in range(width):
    for y in range(height):
        val = round(img[x][y], 2) if img[x][y] != 0 else 0
        ax.annotate(str(val), xy=(y, x),
                    horizontalalignment='center',
                    verticalalignment='center',
                    color='white' if img[x][y] < thresh else 'black')
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
---
## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)

The architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example uses two hidden layers and dropout to avoid overfitting.
import torch.nn as nn
import torch.nn.functional as F

## TODO: Define the NN architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # number of hidden nodes in each layer (one reasonable choice)
        hidden_1 = 512
        hidden_2 = 512
        # linear layer (784 -> hidden_1)
        self.fc1 = nn.Linear(28 * 28, hidden_1)
        # linear layer (hidden_1 -> hidden_2)
        self.fc2 = nn.Linear(hidden_1, hidden_2)
        # linear layer (hidden_2 -> 10 output classes)
        self.fc3 = nn.Linear(hidden_2, 10)
        # dropout layer (p=0.2) to avoid overfitting
        self.dropout = nn.Dropout(0.2)

    def forward(self, x):
        # flatten image input
        x = x.view(-1, 28 * 28)
        # add hidden layers, with relu activation function and dropout
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        # output layer: raw class scores for the 10 digit classes
        x = self.fc3(x)
        return x

# initialize the NN
model = Net()
print(model)
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
### Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)

It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a softmax function to the output layer *and* then calculates the log loss.
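As a small sanity check of that statement (a sketch, not part of the exercise), cross-entropy on raw scores matches applying log-softmax followed by the negative log-likelihood loss:

```python
logits = torch.randn(4, 10)           # raw scores for a batch of 4 images
targets = torch.tensor([3, 7, 0, 9])  # ground-truth digit classes

ce = nn.CrossEntropyLoss()(logits, targets)
manual = F.nll_loss(F.log_softmax(logits, dim=1), targets)
print(torch.allclose(ce, manual))     # True
```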
## TODO: Specify loss and optimization functions

# specify loss function (cross-entropy, as recommended above)
criterion = nn.CrossEntropyLoss()

# specify optimizer (stochastic gradient descent is one reasonable choice)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
---
## Train the Network

The steps for training/learning from a batch of data are described in the comments below:
1. Clear the gradients of all optimized variables
2. Forward pass: compute predicted outputs by passing inputs to the model
3. Calculate the loss
4. Backward pass: compute gradient of the loss with respect to model parameters
5. Perform a single optimization step (parameter update)
6. Update average training loss

The following loop trains for 30 epochs; feel free to change this number. For now, we suggest somewhere between 20-50 epochs. As you train, take a look at how the values for the training loss decrease over time. We want it to decrease while also avoiding overfitting the training data.
# number of epochs to train the model
n_epochs = 30  # suggest training between 20-50 epochs

model.train()  # prep model for training

for epoch in range(n_epochs):
    # monitor training loss
    train_loss = 0.0

    ###################
    # train the model #
    ###################
    for data, target in train_loader:
        # clear the gradients of all optimized variables
        optimizer.zero_grad()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # backward pass: compute gradient of the loss with respect to model parameters
        loss.backward()
        # perform a single optimization step (parameter update)
        optimizer.step()
        # update running training loss
        train_loss += loss.item()*data.size(0)

    # print training statistics
    # calculate average loss over an epoch
    train_loss = train_loss/len(train_loader.sampler)

    print('Epoch: {} \tTraining Loss: {:.6f}'.format(
        epoch+1,
        train_loss
        ))
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
---
## Test the Trained Network

Finally, we test our best model on previously unseen **test data** and evaluate its performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class as well as looking at its overall loss and accuracy.

### `model.eval()`

`model.eval()` will set all the layers in your model to evaluation mode. This affects layers like dropout layers that turn "off" nodes during training with some probability, but should allow every node to be "on" for evaluation!
# initialize lists to monitor test loss and accuracy
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))

model.eval()  # prep model for *evaluation*

for data, target in test_loader:
    # forward pass: compute predicted outputs by passing inputs to the model
    output = model(data)
    # calculate the loss
    loss = criterion(output, target)
    # update test loss
    test_loss += loss.item()*data.size(0)
    # convert output probabilities to predicted class
    _, pred = torch.max(output, 1)
    # compare predictions to true label
    correct = np.squeeze(pred.eq(target.data.view_as(pred)))
    # calculate test accuracy for each object class
    for i in range(len(target)):
        label = target.data[i]
        class_correct[label] += correct[i].item()
        class_total[label] += 1

# calculate and print avg test loss
test_loss = test_loss/len(test_loader.sampler)
print('Test Loss: {:.6f}\n'.format(test_loss))

for i in range(10):
    if class_total[i] > 0:
        print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
            str(i), 100 * class_correct[i] / class_total[i],
            np.sum(class_correct[i]), np.sum(class_total[i])))
    else:
        print('Test Accuracy of %5s: N/A (no training examples)' % (str(i)))

print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
    100. * np.sum(class_correct) / np.sum(class_total),
    np.sum(class_correct), np.sum(class_total)))
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
### Visualize Sample Test Results

This cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()

# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds = torch.max(output, 1)
# prep images for display
images = images.numpy()

# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(images[idx]), cmap='gray')
    ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())),
                 color=("green" if preds[idx]==labels[idx] else "red"))
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb
armhzjz/deep-learning-v2-pytorch
Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func

# create engine to hawaii.sqlite
engine = create_engine('sqlite:///Resources/hawaii.sqlite')

# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)

# View all of the classes that automap found
Base.classes.keys()

# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station

# Create our session (link) from Python to the DB
session = Session(engine)
_____no_output_____
ADSL
climate_starter.ipynb
nebiatabuhay/sqlalchemy-challenge
Exploratory Precipitation Analysis
import datetime as dt
import pandas as pd
import matplotlib.pyplot as plt

# Find the most recent date in the data set.
max_date = session.query(func.max(func.strftime("%Y-%m-%d", Measurement.date))).limit(5).all()
max_date[0][0]

# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting from the most recent data point in the database.
# Calculate the date one year from the last date in data set.
# Perform a query to retrieve the data and precipitation scores
precipitation_data = session.query(func.strftime("%Y-%m-%d", Measurement.date), Measurement.prcp).\
    filter(func.strftime("%Y-%m-%d", Measurement.date) >= dt.date(2016, 8, 23)).all()

# Save the query results as a Pandas DataFrame and set the index to the date column
precipitation_df = pd.DataFrame(precipitation_data, columns=['date', 'precipitation'])

# set index
precipitation_df.set_index('date', inplace=True)

# Sort the dataframe by date
precipitation_df = precipitation_df.sort_values(by='date')
precipitation_df.head()

# Use Pandas Plotting with Matplotlib to plot the data
fig, ax = plt.subplots(figsize=(20, 10))
precipitation_df.plot(ax=ax, x_compat=True)

# title and labels
ax.set_xlabel('Date')
ax.set_ylabel('Precipitation (in.)')
ax.set_title("Year Long Precipitation")
plt.savefig("Images/precipitation.png")

# plot
plt.tight_layout()
plt.show()

# Use Pandas to calculate the summary statistics for the precipitation data
precipitation_df.describe()
_____no_output_____
ADSL
climate_starter.ipynb
nebiatabuhay/sqlalchemy-challenge
Exploratory Station Analysis
# Design a query to calculate the total number of stations in the dataset
stations = session.query(Station.id).distinct().count()
stations

# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
station_counts = (session.query(Measurement.station, func.count(Measurement.station))
                  .group_by(Measurement.station)
                  .order_by(func.count(Measurement.station).desc())
                  .all())
station_counts

# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
most_active_station = 'USC00519281'
temps = session.query(func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)).\
    filter(Measurement.station == most_active_station).all()
temps

# Using the most active station id
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
temp_observation = session.query(Measurement.date, Measurement.tobs).filter(Measurement.station == most_active_station).\
    filter(func.strftime("%Y-%m-%d", Measurement.date) >= dt.date(2016, 8, 23)).all()

# save as a data frame
temp_observation_df = pd.DataFrame(temp_observation, columns=['date', 'temperature'])

fig, ax = plt.subplots()
temp_observation_df.plot.hist(bins=12, ax=ax)

# labels
ax.set_xlabel('Temperature')
ax.set_ylabel('Frequency')

# save figure
plt.savefig("Images/yearly_plot.png")

# plot
plt.tight_layout()
plt.show()
_____no_output_____
ADSL
climate_starter.ipynb
nebiatabuhay/sqlalchemy-challenge
Close session
# Close Session
session.close()
_____no_output_____
ADSL
climate_starter.ipynb
nebiatabuhay/sqlalchemy-challenge
Visit MIT Deep Learning · Run in Google Colab · View Source on GitHub

### Copyright Information
# Copyright 2022 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
# Laboratory 2: Computer Vision

## Part 1: MNIST Digit Classification

In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.

First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
# Import Tensorflow 2.0
#%tensorflow_version 2.x
import tensorflow as tf

#!pip install mitdeeplearning
import mitdeeplearning as mdl

import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm

# Check that we are using a GPU, if not switch runtimes
#   using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
## 1.1 MNIST dataset

Let's download and load the dataset and display a few random samples from it:
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000, 36)
for i in range(36):
    plt.subplot(6, 6, i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    image_ind = random_inds[i]
    plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
    plt.xlabel(train_labels[image_ind])
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
## 1.2 Neural Network for Handwritten Digit Classification

We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:

![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification")

### Fully connected neural network architecture

To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
def build_fc_model():
    fc_model = tf.keras.Sequential([
        # First define a Flatten layer
        tf.keras.layers.Flatten(),

        # '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
        tf.keras.layers.Dense(128, activation=tf.nn.relu),

        # '''TODO: Define the second Dense layer to output the classification probabilities'''
        # Output layer: one unit per digit class (10), softmax for probabilities
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)

    ])
    return fc_model

model = build_fc_model()
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.**

Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.

After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.

That defines our fully connected model!

### Compile the model

Before training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:

* *Loss function*: This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.
* *Optimizer*: This defines how the model is updated based on the data it sees and its loss function.
* *Metrics*: Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.

We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).

You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
'''TODO: Experiment with different optimizers and learning rates. How do these affect
    the accuracy of the trained model? Which optimizers and/or learning rates yield
    the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
### Train the model

We're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training.

In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5

model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
Epoch 1/5 938/938 [==============================] - 5s 6ms/step - loss: 0.4299 - accuracy: 0.8817 Epoch 2/5 938/938 [==============================] - 5s 5ms/step - loss: 0.2194 - accuracy: 0.9376 Epoch 3/5 938/938 [==============================] - 5s 5ms/step - loss: 0.1639 - accuracy: 0.9537 Epoch 4/5 938/938 [==============================] - 5s 5ms/step - loss: 0.1322 - accuracy: 0.9625 Epoch 5/5 938/938 [==============================] - 5s 5ms/step - loss: 0.1107 - accuracy: 0.9682
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data.

### Evaluate accuracy on the test dataset

Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array.

Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(
    x=test_images,
    y=test_labels,
    batch_size=BATCH_SIZE)#,
    #verbose=1,
    #sample_weight=None,
    #steps=None,
    #callbacks=None,
    #max_queue_size=10,
    #workers=1,
    #use_multiprocessing=False,
    #return_dict=False,
    #**kwargs
    #)

print('Test accuracy:', test_acc)
157/157 [==============================] - 1s 5ms/step - loss: 0.1066 - accuracy: 0.9694 Test accuracy: 0.9693999886512756
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data.

What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...

![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg)

## 1.3 Convolutional Neural Network (CNN) for handwritten digit classification

As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:

![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification")

### Define the CNN model

We'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
def build_cnn_model(): cnn_model = tf.keras.Sequential([ # TODO: Define the first convolutional layer tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu), # TODO: Define the first max pooling layer tf.keras.layers.MaxPooling2D(pool_size=(2, 2)), # TODO: Define the second convolutional layer tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), activation=tf.nn.relu), # TODO: Define the second max pooling layer tf.keras.layers.MaxPooling2D(pool_size=(2, 2)), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation=tf.nn.relu), # TODO: Define the last Dense layer to output the classification # probabilities. Pay attention to the activation needed for a probability # output tf.keras.layers.Dense(10, activation=tf.keras.activations.softmax) ]) return cnn_model cnn_model = build_cnn_model() # Initialize the model by passing some data through cnn_model.predict(train_images[[0]]) # Print the summary of the layers in the model. print(cnn_model.summary())
2022-03-28 14:34:43.418149: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8303
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
'''TODO: Define the compile operation with your optimizer and learning rate of choice''' cnn_model.compile(optimizer=tf.keras.optimizers.Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
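A small optional sketch (not part of the original lab): the compile call above relies on Adam's default learning rate, so if you want to experiment with the "learning rate of choice" mentioned above you can set it explicitly; the 1e-3 value here is only an assumed starting point.
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # assumed value; tune as needed
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])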
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.''' cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
Epoch 1/5 938/938 [==============================] - 7s 7ms/step - loss: 0.1806 - accuracy: 0.9467 Epoch 2/5 938/938 [==============================] - 6s 7ms/step - loss: 0.0578 - accuracy: 0.9819 Epoch 3/5 938/938 [==============================] - 6s 7ms/step - loss: 0.0395 - accuracy: 0.9878 Epoch 4/5 938/938 [==============================] - 7s 7ms/step - loss: 0.0300 - accuracy: 0.9906 Epoch 5/5 938/938 [==============================] - 7s 7ms/step - loss: 0.0232 - accuracy: 0.9924
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
'''TODO: Use the evaluate method to test the model!''' test_loss, test_acc = cnn_model.evaluate( x=test_images, y=test_labels, batch_size=BATCH_SIZE) print('Test accuracy:', test_acc)
157/157 [==============================] - 1s 5ms/step - loss: 0.1066 - accuracy: 0.9694 Test accuracy: 0.9693999886512756
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
predictions = cnn_model.predict(test_images)
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
predictions[0]
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
'''TODO: identify the digit with the highest confidence prediction for the first image in the test dataset. ''' prediction = np.argmax(predictions[0]) print(prediction)
7
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
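As a short optional sketch (not in the original lab), the same argmax idea can be applied to every test prediction to recompute the overall accuracy by hand; manual_accuracy is a name introduced here for illustration.
predicted_labels = np.argmax(predictions, axis=-1)                    # most likely class for each test image
manual_accuracy = np.mean(predicted_labels == np.squeeze(test_labels))
print("Manually recomputed test accuracy:", manual_accuracy)
This should agree with the accuracy reported by evaluate above.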
So, the model is most confident that this image is a 7. We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
print("Label of this digit is:", test_labels[0]) plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
Label of this digit is: 7
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
#@title Change the slider to look at the model's predictions! { run: "auto" } image_index = 79 #@param {type:"slider", min:0, max:100, step:1} plt.subplot(1,2,1) mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images) plt.subplot(1,2,2) mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are grey. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
# Plots the first X test images, their predicted label, and the true label # Color correct predictions in blue, incorrect predictions in red num_rows = 5 num_cols = 4 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) mdl.lab2.plot_value_prediction(i, predictions, test_labels)
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training loop; that extra control can be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
# Rebuild the CNN model cnn_model = build_cnn_model() batch_size = 12 loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy') optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists for idx in tqdm(range(0, train_images.shape[0], batch_size)): # First grab a batch of training data and convert the input images to tensors (images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size]) images = tf.convert_to_tensor(images, dtype=tf.float32) # GradientTape to record differentiation operations with tf.GradientTape() as tape: # feed the images into the model and obtain the predictions logits = cnn_model(images) # compute the sparse categorical cross entropy loss loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record plotter.plot(loss_history.get()) # Backpropagation: use the tape to compute the gradient of the loss with respect to all parameters in the CNN model (cnn_model.trainable_variables) grads = tape.gradient(loss_value, cnn_model.trainable_variables) optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
_____no_output_____
MIT
lab2/Part1_MNIST.ipynb
AnthonyLapadula/introtodeeplearning
Sketch of UWB pipelineThis notebook contains the original sketch of the uwb implementation which is available in the uwb package. Code in the package is mostly reworked and divided into modules. For usage of the package, please check out the other notebook in the directory.
%matplotlib inline import numpy as np import matplotlib.pyplot as plt from sklearn.datasets import make_blobs from sklearn.cluster import DBSCAN from itertools import product from scipy.stats import multivariate_normal from functools import reduce def multi_dim_noise(grid_dims, amount, step, std=10, means=(1,5)): prod = reduce((lambda x,y: x*y), grid_dims) samples = np.zeros(grid_dims + [amount , len(grid_dims)]) clusters = np.random.randint( means[0], means[1] + 1, size=grid_dims ) grid = [] for dim in grid_dims: grid.append(((np.arange(dim) + 1) * step)) mean = np.array(np.meshgrid(*grid, indexing="ij")).reshape(prod, len(grid_dims)) noise = np.random.randn(means[1], prod, len(grid_dims)) * std centers = (noise + mean).reshape([means[1]] + grid_dims + [len(grid_dims)]) # transpose hack for selection roll_idx = np.roll(np.arange(centers.ndim),-1).tolist() centers = np.transpose(centers, roll_idx) for idxs in product(*[range(i) for i in grid_dims]): print(idxs) samples[idxs] = make_blobs( n_samples=amount, centers=(centers[idxs][:, 0:clusters[idxs]]).T )[0] return samples def generate_noise(width, length, amount, step, std=10, means=(1,5)): samples = np.zeros((width, length, amount, 2)) clusters = np.random.randint( means[0], means[1] + 1, size=(width, length) ) # calculate centers grid_width = (np.arange(width) + 1) * step grid_length = (np.arange(length) + 1) * step mean = np.array( [ np.repeat(grid_width, len(grid_length)), np.tile(grid_length, len(grid_width)), ] ).T noise = np.random.randn(means[1], width * length, 2) * std centers = (noise + mean).reshape((means[1], width, length, 2)) for i in range(width): for j in range(length): samples[i, j, :] = make_blobs( n_samples=amount, centers=centers[0 : clusters[i, j], i, j, :] )[0] return samples, (grid_width, grid_length) np.random.seed(0) data, map_grid = generate_noise(3, 3, 50, 10) multi_dim_noise([4,2,5], 50, 10) plt.plot(data[0,0,:,0], data[0,0,:,1], 'o') # example of 5 clusters in position 0,0 plt.show() def generate_map(noise, eps=2, min_samples=3): db = DBSCAN(eps=eps, min_samples=min_samples).fit(noise) core_samples_mask = np.zeros_like(db.labels_, dtype=bool) core_samples_mask[db.core_sample_indices_] = True labels = db.labels_ # Number of clusters in labels, ignoring noise if present. n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0) n_noise_ = list(labels).count(-1) return labels, core_samples_mask, n_clusters_ def plot_clusters(X, labels, core_sapmles_mask, n_clusters_): unique_labels = set(labels) colors = [plt.cm.Spectral(each) for each in np.linspace(0, 1, len(unique_labels))] for k, col in zip(unique_labels, colors): if k == -1: # Black used for noise. 
col = [0, 0, 0, 1] class_member_mask = (labels == k) xy = X[class_member_mask & core_samples_mask] plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=14) xy = X[class_member_mask & ~core_samples_mask] plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=6) plt.title('Estimated number of clusters: %d' % n_clusters_) labels = np.zeros((3, 3, 50), dtype=int) for x,y in product(range(3), range(3)): labels[x,y,:], core_samples_mask, n_clusters_ = generate_map(data[x,y,:,:]) plot_clusters(data[x,y,:,:], labels[x,y,:], core_samples_mask, n_clusters_) plt.show() # estimate parameters # this is quite slow but calculation is perfomed only once per map generation params = [[[] for i in range(3)] for i in range(3)] for x,y in product(range(3), range(3)): used_data = 50 - list(labels[x,y]).count(-1) for i in range(np.max(labels[x,y,:]) + 1): mask = labels[x,y] == i mean_noise = data[x,y,mask,:].mean(axis=0) - np.array([(x+1) * 10,(y+1) * 10]) cov_noise = np.cov(data[x,y,mask,:].T) weight = mask.sum() / used_data params[x][y].append((mean_noise, cov_noise, weight)) print(params) # dynamics model walk = [] start_state = np.array([[20, 20, 0, 0]], dtype=float) walk.append(start_state) def transition_function(current_state, x_range=(10, 40), y_range=(10, 40), std=1): """Performs a one step transition assuming sensing interval of one Format of current_state = [x,y,x',y'] + first dimension is batch size """ next_state = np.copy(current_state) next_state[: ,0:2] += current_state[:, 2:4] next_state[: ,2:4] += np.random.randn(2) * std next_state[: ,0] = np.clip(next_state[: ,0], x_range[0], x_range[1]) next_state[: ,1] = np.clip(next_state[: ,1], y_range[0], y_range[1]) return next_state next_state = transition_function(start_state) walk.append(next_state) for i in range(100): next_state = transition_function(next_state) walk.append(next_state) walk = np.array(walk) print(walk.shape) plt.plot(walk[:,0,0], walk[:,0, 1]) plt.show() # measurement noise map augmented particle filter def find_nearest_map_position(x,y, map_grid): x_pos = np.searchsorted(map_grid[0], x) y_pos = np.searchsorted(map_grid[1], y, side="right") x_valid = (x_pos != 0) & (x_pos < len(map_grid[0])) x_pos = np.clip(x_pos, 0, len(map_grid[0]) - 1) x_dist_right = map_grid[0][x_pos] - x x_dist_left = x - map_grid[0][x_pos - 1] x_pos[x_valid & (x_dist_right > x_dist_left)] -= 1 y_valid = (y_pos != 0) & (y_pos < len(map_grid[1])) y_pos = np.clip(y_pos, 0, len(map_grid[1]) - 1) y_dist_right = map_grid[1][y_pos] - y y_dist_left = y - map_grid[0][y_pos - 1] y_pos[y_valid & (y_dist_right > y_dist_left)] -= 1 return x_pos, y_pos def reweight_samples(x, z, w, params, map_grip): x_pos, y_pos = find_nearest_map_position(x[:,0], x[:,1], map_grid) new_weights = np.zeros_like(w) for i, (x_p, y_p) in enumerate(zip(x_pos, y_pos)): for gm in params[x_p][y_p]: # calculating p(z|x) for GM mean, cov, weight = gm new_weights[i] += multivariate_normal.pdf(z[i, 0:2] ,mean=mean, cov=cov) * weight * w[i] denorm = np.sum(new_weights) return new_weights / denorm print(map_grid) x = np.array([9, 10, 11, 14, 16, 24, 31, 30, 29, 15]) y = np.array([9, 10, 11, 14, 16, 24, 31, 30, 29, 15]) w = np.ones(10) * 0.1 print(find_nearest_map_position( x, y, map_grid )) x_noise = np.random.randn(10) y_noise = np.random.randn(10) particles = np.stack((x, y, x_noise, y_noise)).T transitioned_particles = transition_function(particles) n_w = reweight_samples(particles, transitioned_particles, w, params, 
map_grid) # compute metrics for resampling def compute_ESS(x, w): M = len(x) CV = 1/M * np.sum((w*M-1)**2) return M / (1 + CV) print(compute_ESS(particles, w)) print(compute_ESS(particles, n_w)) # needs to be resampled
_____no_output_____
MIT
notebooks/noise_map_generator_example.py.ipynb
freiberg-roman/uwb-proto
Multiple-criteria Analysis
from dpd.mca import MultipleCriteriaAnalysis from dpd.d3 import radar_chart from IPython.core.display import HTML attributes = ["Cost", "Time", "Comfort"] alternatives = ["Tram", "Bus"] mca = MultipleCriteriaAnalysis(attributes, alternatives) mca.mca["Tram"]["Cost"] = 200 mca.mca["Bus"]["Cost"] = 100 mca.mca["Tram"]["Time"] = 50 mca.mca["Bus"]["Time"] = 100 mca.mca["Tram"]["Comfort"] = 800 mca.mca["Bus"]["Comfort"] = 500 mca.mca legend_options, d, title = mca.to_d3_radar_chart() HTML(radar_chart(legend_options, d, title))
_____no_output_____
MIT
docs/notebooks/mca.ipynb
davidbailey/dpd
Example of TreeMix NOTE: This page was originally used by HuggingFace to illustrate the summary of various tasks ([original page](https://colab.research.google.com/github/huggingface/notebooks/blob/master/transformers_doc/pytorch/task_summary.ipynb#scrollTo=XJEVX6F9rQdI)); we use it to show the examples we illustrate in our paper. We follow the original settings and only change the sentences to be predicted. This is a sequence classification model trained on the full SST-2 dataset.
# Transformers installation ! pip install transformers datasets # To install from source instead of the last release, comment the command above and uncomment the following one. # ! pip install git+https://github.com/huggingface/transformers.git
Collecting transformers Downloading transformers-4.12.2-py3-none-any.whl (3.1 MB)  |████████████████████████████████| 3.1 MB 4.9 MB/s [?25hCollecting datasets Downloading datasets-1.14.0-py3-none-any.whl (290 kB)  |████████████████████████████████| 290 kB 42.2 MB/s [?25hRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.3.0) Collecting huggingface-hub>=0.0.17 Downloading huggingface_hub-0.0.19-py3-none-any.whl (56 kB)  |████████████████████████████████| 56 kB 4.1 MB/s [?25hRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers) (2.23.0) Collecting sacremoses Downloading sacremoses-0.0.46-py3-none-any.whl (895 kB)  |████████████████████████████████| 895 kB 56.3 MB/s [?25hRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (1.19.5) Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers) (4.8.1) Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers) (21.0) Collecting pyyaml>=5.1 Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)  |████████████████████████████████| 596 kB 57.1 MB/s [?25hRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.62.3) Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2019.12.20) Collecting tokenizers<0.11,>=0.10.1 Downloading tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3 MB)  |████████████████████████████████| 3.3 MB 30.8 MB/s [?25hRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from huggingface-hub>=0.0.17->transformers) (3.7.4.3) Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers) (2.4.7) Collecting xxhash Downloading xxhash-2.0.2-cp37-cp37m-manylinux2010_x86_64.whl (243 kB)  |████████████████████████████████| 243 kB 57.5 MB/s [?25hRequirement already satisfied: dill in /usr/local/lib/python3.7/dist-packages (from datasets) (0.3.4) Requirement already satisfied: multiprocess in /usr/local/lib/python3.7/dist-packages (from datasets) (0.70.12.2) Collecting aiohttp Downloading aiohttp-3.8.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)  |████████████████████████████████| 1.1 MB 57.7 MB/s [?25hCollecting fsspec[http]>=2021.05.0 Downloading fsspec-2021.10.1-py3-none-any.whl (125 kB)  |████████████████████████████████| 125 kB 60.6 MB/s [?25hRequirement already satisfied: pyarrow!=4.0.0,>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from datasets) (3.0.0) Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from datasets) (1.1.5) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2.10) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (1.24.3) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (3.0.4) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from 
requests->transformers) (2021.5.30) Requirement already satisfied: charset-normalizer<3.0,>=2.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp->datasets) (2.0.7) Collecting aiosignal>=1.1.2 Downloading aiosignal-1.2.0-py3-none-any.whl (8.2 kB) Collecting yarl<2.0,>=1.0 Downloading yarl-1.7.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (271 kB)  |████████████████████████████████| 271 kB 58.0 MB/s [?25hCollecting asynctest==0.13.0 Downloading asynctest-0.13.0-py3-none-any.whl (26 kB) Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp->datasets) (21.2.0) Collecting frozenlist>=1.1.1 Downloading frozenlist-1.2.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (192 kB)  |████████████████████████████████| 192 kB 57.0 MB/s [?25hCollecting multidict<7.0,>=4.5 Downloading multidict-5.2.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (160 kB)  |████████████████████████████████| 160 kB 57.1 MB/s [?25hCollecting async-timeout<5.0,>=4.0.0a3 Downloading async_timeout-4.0.0a3-py3-none-any.whl (9.5 kB) Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers) (3.6.0) Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->datasets) (2.8.2) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->datasets) (2018.9) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas->datasets) (1.15.0) Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (7.1.2) Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.0.1) Installing collected packages: multidict, frozenlist, yarl, asynctest, async-timeout, aiosignal, pyyaml, fsspec, aiohttp, xxhash, tokenizers, sacremoses, huggingface-hub, transformers, datasets Attempting uninstall: pyyaml Found existing installation: PyYAML 3.13 Uninstalling PyYAML-3.13: Successfully uninstalled PyYAML-3.13 Successfully installed aiohttp-3.8.0 aiosignal-1.2.0 async-timeout-4.0.0a3 asynctest-0.13.0 datasets-1.14.0 frozenlist-1.2.0 fsspec-2021.10.1 huggingface-hub-0.0.19 multidict-5.2.0 pyyaml-6.0 sacremoses-0.0.46 tokenizers-0.10.3 transformers-4.12.2 xxhash-2.0.2 yarl-1.7.0
MIT
transformers_doc/pytorch/task_summary.ipynb
Magiccircuit/TreeMix
Sequence Classification Sequence classification is the task of classifying sequences according to a given number of classes. An example of sequence classification is the GLUE dataset, which is entirely based on that task. If you would like to fine-tune a model on a GLUE sequence classification task, you may leverage the :prefix_link:*run_glue.py*, :prefix_link:*run_tf_glue.py*, :prefix_link:*run_tf_text_classification.py* or :prefix_link:*run_xnli.py* scripts. Here is an example of using pipelines to do sentiment analysis: identifying if a sequence is positive or negative. It leverages a fine-tuned model on sst2, which is a GLUE task. This returns a label ("POSITIVE" or "NEGATIVE") alongside a score, as follows:
from transformers import pipeline classifier = pipeline("sentiment-analysis") result = classifier("This film is good and every one loves it")[0] print(f"label: {result['label']}, with score: {round(result['score'], 4)}") result = classifier("The film is poor and I do not like it")[0] print(f"label: {result['label']}, with score: {round(result['score'], 4)}") result = classifier("This film is good but I do not like it")[0] print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)
MIT
transformers_doc/pytorch/task_summary.ipynb
Magiccircuit/TreeMix
Selecting Data from a Data Frame, Plotting, and Indexes Selecting Data Import Pandas and Load in the Data from **practicedata.csv**. Call the dataframe 'df'. Show the first 5 lines of the table.
import pandas as pd df = pd.read_csv('practicedata.csv') # overwrite this yourself df.head(5)
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Let's walk through how to select data by row and column number using .iloc[row, column]
# Let's select the first row of the table first_row = df.iloc[0,:] first_row #now let's try selecting the first column of the table first_column = df.iloc[:, 0] #let's print the first five rows of the column first_column[:5]
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Notice a few things: 1st - We select parts of a dataframe by numeric position in the table using .iloc followed by two values in square brackets. 2nd - We can use ':' to indicate that we want all of a row or column. 3rd - The values in the square brackets are [row, column]. Our old friend from lists is back: **slicing**. We can slice in much the same way as lists:
upper_corner_of_table = df.iloc[:5,:5] upper_corner_of_table another_slice = df.iloc[:5, 5:14] another_slice
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Now let's select a column by its name
oil_prod = df['OIL_PROD'] #simply put the column name as a string in square brackets oil_prod[:8]
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Let's select multiple columns
production_streams = df[['OIL_PROD', 'WATER_PROD', 'OWG_PROD']] # notice that we passed a list of columns production_streams.head(5)
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Let's select data by **index**
first_rows = df.loc[0:5] # to select by row index, we pass the index(es) we want to select with .loc[row index] first_rows
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
We can also use loc to select rows and columns at the same time using .loc[row index, column index]
production_streams = df.loc[0:5, ['OIL_PROD', 'WATER_PROD', 'OWG_PROD']] production_streams
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Note that you can't mix positional selection and index selection (.iloc vs .loc)
error = df.loc[0:5, 0:5]
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
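If you do need positional columns together with label-based rows, one workaround sketch (not from the original lesson) is to translate the positions into labels first:
mixed = df.loc[0:5, df.columns[0:5]]   # df.columns[0:5] turns the positional slice into column labels
mixed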
When you are selecting data from a dataframe, there is a lot of potential to change the **type** of your data. Let's see what the output types are of the various selection methods.
print(type(df)) print(type(df.iloc[:,0]))
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Notice how the type changes when we select the first column? A pandas Series is similar to a Python dictionary, but there are important differences. You can call numerous functions like mean, sum, etc. on a pandas Series, and unlike dictionaries, a Series has an index instead of keys and allows different values to be associated with the same index value. Let's try this with a row instead of a column.
print(type(df.iloc[0, :]))
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Rows also become Series objects when a single one is selected. Let's try summing the water production really quickly.
print(df['WATER_PROD'].sum())
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
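The .sum() call works because a single column is a pandas Series. As a quick illustrative sketch (not part of the original lesson), a Series also supports other aggregations directly and, unlike a dictionary, can repeat index labels:
s = pd.Series([1.0, 2.0, 3.0], index=['a', 'a', 'b'])   # duplicate index labels are allowed
print(s['a'])      # returns both values stored under 'a'
print(s.mean())    # aggregation methods work directly on a Series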
Try summing the oil and water production together below (one possible solution is sketched after the next cell). Lastly, let's see what type we get when we select multiple rows/columns
print(type(df.iloc[:5,:5]))
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
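Returning to the exercise above, one possible solution sketch (not the lesson's official answer):
total_liquid_produced = (df['OIL_PROD'] + df['WATER_PROD']).sum()   # column names come from the table above
print(total_liquid_produced)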
If we select multiple rows/columns we keep our dataframe type. This can be important in code, as dataframes and series behave differently. This is a particular problem if you have an index with unexpected duplicate values and you are selecting something by index expecting a series, but you get multiple rows and have a dataframe instead. Fast Scalar Selection of a 'cell' of a dataframe There are special functions for selecting scalar values in pandas. These functions are .at[] and .iat[]. These functions are much faster (60-70% faster) than .loc or .iloc when selecting a scalar value. Let's try them out.
#select the first value print(df.iat[0,0]) print(df.at[0,'API_NO14'])
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Notice that it works the same as .loc and .iloc; the only difference is that you must select exactly one value.
print(df.iat[0:5, 1]) # gives an error since I tried to select more than one value
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Adding columns
# you can add a column by assigning it a starting value df['column of zeros'] = 0 # you can also create a column by adding columns (or doing anything else that results in a column of the same size) df['GROSS_LIQUID'] = df['OIL_PROD'] + df['WATER_PROD'] df.iloc[0:2, 30:]
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Removing Columns
# remove columns via the .drop function df = df.drop(['column of zeros'], axis=1) df.iloc[0:2, 30:]
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Selecting Data with conditionals (booleans)
# We can use conditional statements (just like with if and while statements to find whether conditions are true/false in our data) # Let's find the months with no oil in wells months_with_zero_oil = df['OIL_PROD'] == 0 print(months_with_zero_oil.sum()) print(len(months_with_zero_oil)) # What does months with zero oil look like? months_with_zero_oil[:5] # Lets try to make a column out of months with zero oil df['zero_oil'] = months_with_zero_oil df.iloc[0:2, 30:] # Let's make the value of zero oil 10,000 whenever there is zero oil (no reason, just for fun) temp = df[months_with_zero_oil] temp['zero_oil'] = 10000.00 temp.head(3)
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Notice the warning we received about setting data on a 'slice' of a dataframe. This is because when you select a piece of a dataframe, it doesn't (by default at least) create a new dataframe; it shows you a 'view' of the original data. This is true even if we assign that piece to a new variable like we did above. When we set the zero oil column to 10000, this could also affect the original dataframe. This is why the warning was given: this may or may not be what we want. Let's see if the original dataframe was affected...
df[months_with_zero_oil].head(5) temp.head(5)
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
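If the goal had been an independent table rather than a view, a sketch of the explicit fix (not shown in the original lesson) is to copy up front, which also silences the warning:
temp = df[months_with_zero_oil].copy()   # an explicit, independent copy
temp['zero_oil'] = 10000.00              # no SettingWithCopyWarning, and df stays untouched
temp.head(3)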
In this case we were protected from making changes to the original dataframe. What if we want to change the original dataframe?
# Let's try this instead df.loc[months_with_zero_oil,'zero_oil'] = 10000.00 df[months_with_zero_oil].head(5)
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
That got it! We were able to set values in the original dataframe using the 'boolean' series of months with zero oil. Finding, Changing, and Setting Data
# Find a column in a dataframe if 'API_NO14' in df: print('got it') else: print('nope, not in there') # If a column name is in a dataframe, get it for column in df: print(column) # Search through the rows of a table count = 0 for row in df.iterrows(): count += 1 print(row) if count == 1: break
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Notice that 'row' is a **tuple** with the row index at 0 and the row series at 1
# Let's change WATER_INJ to 1 for the first row count = 0 for row in df.iterrows(): df.loc[row[0], 'WATER_INJ'] = 1 count += 1 if count == 1: break df[['WATER_INJ']].head(1)
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Exercise: Fix the APIs in the table All the APIs have been converted to numbers and are missing the leading zero. Can you add it back in and convert them to strings in a new column? (A sketch of one possible solution follows the matplotlib import cell below.) Plotting Data First we need to import matplotlib and set the Jupyter notebook to display the plots
import matplotlib.pyplot as plt %matplotlib inline
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
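Before moving on to plotting, here is one possible sketch for the API exercise above; the column name API_NO14_STR is introduced here for illustration and is not part of the original lesson.
df['API_NO14_STR'] = df['API_NO14'].astype(str).str.zfill(14)   # pad back to 14 digits, restoring the leading zero
df[['API_NO14', 'API_NO14_STR']].head()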
Let's select data related to the well: 04029270170000 and plot it
# Let's plot using the original API_NO14 Column for now df.loc[df['API_NO14'] == 4029270170000, 'OIL_PROD'].plot() # Those numbers are not super helpful, lets try a new index # lets copy the dataframe sorted_df = df.copy() # Convert dates to a 'datetime' type instead of string sorted_df['PROD_INJ_DATE'] = pd.to_datetime(df['PROD_INJ_DATE']) # Then we sort by production/injection date sorted_df = sorted_df.sort_values(by=['PROD_INJ_DATE']) # Then we set the row index to be API # and Date sorted_df.set_index(['API_NO14', 'PROD_INJ_DATE'], inplace=True, drop=False) sorted_df.head(2) # Lets select the well we want to plot by api # plot_df = sorted_df.loc[4029270170000] plot_df.head(5) # Now let's try plotting again plot_df['OIL_PROD'].plot()
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Let's manipulate the plot and try different options
plot_df['OIL_PROD'].plot(logy=True) plot_df[['OIL_PROD', 'WATER_PROD']].plot(sharey=True, logy=True)
_____no_output_____
MIT
CRC Make A Thon Crash Course Lesson 8 Python Pandas Selection and Plotting.ipynb
acox84/crc_python_crash_course
Network Science Theory****** Network Science: Using NetworkX to Analyze Complex Networks******王成军 [email protected] 计算传播网 http://computational-communication.com http://networkx.readthedocs.org/en/networkx-1.11/tutorial/
%matplotlib inline import networkx as nx import matplotlib.cm as cm import matplotlib.pyplot as plt import networkx as nx G=nx.Graph() # G = nx.DiGraph() # directed network # add an (isolated) node G.add_node("spam") # add nodes and an edge G.add_edge(1,2) print(G.nodes()) print(G.edges()) # draw the network nx.draw(G, with_labels = True)
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
WWW Data download http://www3.nd.edu/~networks/resources.htm World-Wide-Web: [README] [DATA] Réka Albert, Hawoong Jeong and Albert-László Barabási: Diameter of the World Wide Web Nature 401, 130 (1999) [ PDF ] Assignment: - download the www data - build a networkx graph object g (hint: directed network) - add the www data to g - count the number of nodes and edges in the network
G = nx.Graph() n = 0 with open ('/Users/chengjun/bigdata/www.dat.gz.txt') as f: for line in f: n += 1 #if n % 10**4 == 0: #flushPrint(n) x, y = line.rstrip().split(' ') G.add_edge(x,y) nx.info(G)
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
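The assignment's hint asks for a directed network, while the cell above builds an undirected Graph; a minimal variant sketch (same file path as above, with G_directed introduced here for illustration):
G_directed = nx.DiGraph()
with open('/Users/chengjun/bigdata/www.dat.gz.txt') as f:
    for line in f:
        x, y = line.rstrip().split(' ')
        G_directed.add_edge(x, y)
nx.info(G_directed)   # number of nodes and edges in the directed WWW network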
Describing the network nx.karate_club_graph We start from the karate_club_graph to explore the basic properties of a network.
G = nx.karate_club_graph() clubs = [G.node[i]['club'] for i in G.nodes()] colors = [] for j in clubs: if j == 'Mr. Hi': colors.append('r') else: colors.append('g') nx.draw(G, with_labels = True, node_color = colors) G.node[1] # attributes of node 1 G.edge.keys()[:3] # ids of the first three edges nx.info(G) G.nodes()[:10] G.edges()[:3] G.neighbors(1) nx.average_shortest_path_length(G)
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
Network diameter
nx.diameter(G) # returns the diameter of graph G (the length of the longest shortest path)
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
Density
nx.density(G) nodeNum = len(G.nodes()) edgeNum = len(G.edges()) 2.0*edgeNum/(nodeNum * (nodeNum - 1))
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
Assignment: - compute the network density of the www network (a sketch solution appears after the next cell) Clustering coefficient
cc = nx.clustering(G) cc.items()[:5] plt.hist(cc.values(), bins = 15) plt.xlabel('$Clustering \, Coefficient, \, C$', fontsize = 20) plt.ylabel('$Frequency, \, F$', fontsize = 20) plt.show()
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
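A sketch for the density part of the assignment above. Note that G was re-assigned to the karate-club graph, so the WWW edge list is re-read into its own variable (G_www is a name introduced here for illustration):
G_www = nx.Graph()
with open('/Users/chengjun/bigdata/www.dat.gz.txt') as f:
    for line in f:
        x, y = line.rstrip().split(' ')
        G_www.add_edge(x, y)
print(nx.density(G_www))   # density = 2m / (n(n-1)) for an undirected graph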
Spacing in Math ModeIn a math environment, LaTeX ignores the spaces you type and puts in the spacing that it thinks is best. LaTeX formats mathematics the way it's done in mathematics texts. If you want different spacing, LaTeX provides the following four commands for use in math mode:\; - a thick space\: - a medium space\, - a thin space\\! - a negative thin space (a tiny example appears after the next code cell) Assortativity coefficient
# M. E. J. Newman, Mixing patterns in networks Physical Review E, 67 026126, 2003 nx.degree_assortativity_coefficient(G) # compute the degree assortativity of a graph Ge=nx.Graph() Ge.add_nodes_from([0,1],size=2) Ge.add_nodes_from([2,3],size=3) Ge.add_edges_from([(0,1),(2,3)]) print(nx.numeric_assortativity_coefficient(Ge,'size')) # plot degree correlation from collections import defaultdict import numpy as np l=defaultdict(list) g = nx.karate_club_graph() for i in g.nodes(): k = [] for j in g.neighbors(i): k.append(g.degree(j)) l[g.degree(i)].append(np.mean(k)) #l.append([g.degree(i),np.mean(k)]) x = l.keys() y = [np.mean(i) for i in l.values()] #x, y = np.array(l).T plt.plot(x, y, 'r-o', label = '$Karate\;Club$') plt.legend(loc=1,fontsize=10, numpoints=1) plt.xscale('log'); plt.yscale('log') plt.ylabel(r'$<knn(k)$> ', fontsize = 20) plt.xlabel('$k$', fontsize = 20) plt.show()
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication
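A tiny illustrative LaTeX snippet (not from the original notes) showing the four spacing commands side by side:
\int f(x)\;dx \quad \int f(x)\:dx \quad \int f(x)\,dx \quad \int f(x)\!dx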
Degree centrality measures (degree centrality)* degree_centrality(G) Compute the degree centrality for nodes.* in_degree_centrality(G) Compute the in-degree centrality for nodes.* out_degree_centrality(G) Compute the out-degree centrality for nodes.* closeness_centrality(G[, v, weighted_edges]) Compute closeness centrality for nodes.* betweenness_centrality(G[, normalized, ...]) Betweenness centrality measures (betweenness centrality)
dc = nx.degree_centrality(G) closeness = nx.closeness_centrality(G) betweenness= nx.betweenness_centrality(G) fig = plt.figure(figsize=(15, 4),facecolor='white') ax = plt.subplot(1, 3, 1) plt.hist(dc.values(), bins = 20) plt.xlabel('$Degree \, Centrality$', fontsize = 20) plt.ylabel('$Frequency, \, F$', fontsize = 20) ax = plt.subplot(1, 3, 2) plt.hist(closeness.values(), bins = 20) plt.xlabel('$Closeness \, Centrality$', fontsize = 20) ax = plt.subplot(1, 3, 3) plt.hist(betweenness.values(), bins = 20) plt.xlabel('$Betweenness \, Centrality$', fontsize = 20) plt.tight_layout() plt.show() fig = plt.figure(figsize=(15, 8),facecolor='white') for k in betweenness: plt.scatter(dc[k], closeness[k], s = betweenness[k]*1000) plt.text(dc[k], closeness[k]+0.02, str(k)) plt.xlabel('$Degree \, Centrality$', fontsize = 20) plt.ylabel('$Closeness \, Centrality$', fontsize = 20) plt.show()
_____no_output_____
MIT
code/17.networkx.ipynb
nju-teaching/computational-communication