Dataset columns (string lengths): markdown 0–1.02M, code 0–832k, output 0–1.02M, license 3–36, path 6–265, repo_name 6–127.
### Input Utilities

Now we will need some utility functions for the inputs of our model. Let's start off with our image input transform function. We will separate out the normalization step from the transform in order to view the original image.
image_size = 448  # scale image to given size and center
central_fraction = 1.0

transform = get_transform(image_size, central_fraction=central_fraction)
transform_normalize = transform.transforms.pop()
/opt/homebrew/lib/python3.7/site-packages/torchvision/transforms/transforms.py:210: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead. warnings.warn("The use of the transforms.Scale transform is deprecated, " +
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
Now for the input question, we will need an encoding function (to go from words -> indices):
def encode_question(question):
    """ Turn a question into a vector of indices and a question length """
    question_arr = question.lower().split()
    vec = torch.zeros(len(question_arr), device=device).long()
    for i, token in enumerate(question_arr):
        index = token_to_index.get(token, 0)
        vec[i] = index
    return vec, torch.tensor(len(question_arr), device=device)
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
### Baseline Inputs

The Insights API utilises Captum's attribution API under the hood, hence we will need a baseline for our inputs. A baseline is (typically) a neutral input used as a reference so that our attribution algorithm(s) can understand which features are important in making a prediction (this is a very simplified explanation; 'Remark 1' in the [Integrated Gradients paper](https://arxiv.org/pdf/1703.01365.pdf) has an excellent explanation of why baselines must be used).

For images, and for the purpose of this tutorial, we will let this baseline be the zero vector (a black image).
def baseline_image(x):
    return x * 0
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
For sentences, as done in the multi-modal VQA tutorial, we will use a sentence composed of padding symbols.

We will also need to pass our model through the [`configure_interpretable_embedding_layer`](https://captum.ai/api/utilities.html?highlight=configure_interpretable_embedding_layer#captum.attr._models.base.configure_interpretable_embedding_layer) function, which separates the embedding layer and precomputes word embeddings. Put simply, this function allows us to precompute the embedding vectors and pass them directly to our model, which lets us reference the words associated with particular embeddings (for visualization purposes).
interpretable_embedding = configure_interpretable_embedding_layer(
    vqa_resnet, "module.text.embedding"
)

PAD_IND = token_to_index["pad"]
token_reference = TokenReferenceBase(reference_token_idx=PAD_IND)


def baseline_text(x):
    seq_len = x.size(0)
    ref_indices = token_reference.generate_reference(seq_len, device=device).unsqueeze(0)
    return interpretable_embedding.indices_to_embeddings(ref_indices).squeeze(0)


def input_text_transform(x):
    return interpretable_embedding.indices_to_embeddings(x)
../captum/attr/_models/base.py:168: UserWarning: In order to make embedding layers more interpretable they will be replaced with an interpretable embedding layer which wraps the original embedding layer and takes word embedding vectors as inputs of the forward function. This allows to generate baselines for word embeddings and compute attributions for each embedding dimension. The original embedding layer must be set back by calling `remove_interpretable_embedding_layer` function after model interpretation is finished. after model interpretation is finished."""
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
### Using the Insights API

Finally we have reached the relevant part of the tutorial.

First let's create a utility function to allow us to pass data into the Insights API. This function will essentially produce `Batch` objects, which tell the Insights API what your inputs, labels and any additional arguments are.
def vqa_dataset(image, questions, targets):
    img = Image.open(image).convert("RGB")
    img = transform(img).unsqueeze(0)

    for question, target in zip(questions, targets):
        q, q_len = encode_question(question)
        q = q.unsqueeze(0)
        q_len = q_len.unsqueeze(0)

        target_idx = answer_to_index[target]

        yield Batch(inputs=(img, q), labels=(target_idx,), additional_args=q_len)
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
Let's create our `AttributionVisualizer`. To do this we need the following:

- A score function, which tells us how to interpret the model's output vector
- A description of the input features given to the model
- The data to visualize (as described above)
- A description of the output (the class names); in our case these are our answer words

In our case, we want to produce a single answer output via softmax.
def score_func(o):
    return F.softmax(o, dim=1)
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
The following function will convert a sequence of question indices to the associated question words for visualization purposes. This will be provided to the `TextFeature` object to describe text features.
def itos(input):
    return [question_words[int(i)] for i in input.squeeze(0)]
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
Let's define some dummy data to visualize using the function we declared earlier.
dataset = vqa_dataset(
    "./img/vqa/elephant.jpg",
    ["what is on the picture", "what color is the elephant", "where is the elephant"],
    ["elephant", "gray", "zoo"],
)
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
Now let's describe our features. Each feature requires an input transformation function and a set of baselines. As described earlier, we will use the black image for the image baseline and a padded sequence for the text baseline.

The input image will be transformed via our normalization transform (`transform_normalize`).

Our input text will need to be transformed into embeddings, as it is a sequence of indices. Our model only accepts embeddings as input, since we modified the model with `configure_interpretable_embedding_layer` earlier.

We also need to specify how the input text should be transformed for visualization, which will be accomplished through the `itos` function described earlier.
features = [
    ImageFeature(
        "Picture",
        input_transforms=[transform_normalize],
        baseline_transforms=[baseline_image],
    ),
    TextFeature(
        "Question",
        input_transforms=[input_text_transform],
        baseline_transforms=[baseline_text],
        visualization_transform=itos,
    ),
]
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
Let's define our AttributionVisualizer object with the above parameters and our `vqa_resnet` model.
visualizer = AttributionVisualizer(
    models=[vqa_resnet],
    score_func=score_func,
    features=features,
    dataset=dataset,
    classes=answer_words,
)
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
And now we can visualize the outputs produced by the model.

As of writing this tutorial, the `AttributionVisualizer` class utilizes Captum's implementation of [integrated gradients](https://captum.ai/docs/algorithms#integrated-gradients) ([`IntegratedGradients`](https://captum.ai/api/integrated_gradients.html)).
visualizer.render()

# show a screenshot if using notebook non-interactively
from IPython.display import Image
Image(filename='img/captum_insights_vqa.png')
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
Finally, since we are done with visualization, we will revert the change to the model we made with `configure_interpretable_embedding_layer`. To do this, we will invoke the `remove_interpretable_embedding_layer` function.
remove_interpretable_embedding_layer(vqa_resnet, interpretable_embedding)
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
## Simple Widget Introduction

### What are widgets?

Widgets are eventful Python objects that have a representation in the browser, often as a control like a slider, textbox, etc.

### What can they be used for?

You can use widgets to build **interactive GUIs** for your notebooks. You can also use widgets to **synchronize stateful and stateless information** between Python and JavaScript.

### Using widgets

To use the widget framework, you need to import `ipywidgets`.
import ipywidgets as widgets
_____no_output_____
BSD-3-Clause
notebooks/03.00-Widget_Basics.ipynb
mbektasbbg/tutorial
### repr

Widgets have their own display `repr` which allows them to be displayed using IPython's display framework. Constructing and returning an `IntSlider` automatically displays the widget (as seen below). Widgets are displayed inside the output area below the code cell. Clearing cell output will also remove the widget.
widgets.IntSlider()
_____no_output_____
BSD-3-Clause
notebooks/03.00-Widget_Basics.ipynb
mbektasbbg/tutorial
### display()

You can also explicitly display the widget using `display(...)`.
from IPython.display import display

w = widgets.IntSlider()
display(w)
_____no_output_____
BSD-3-Clause
notebooks/03.00-Widget_Basics.ipynb
mbektasbbg/tutorial
### Multiple display() calls

If you display the same widget twice, the displayed instances in the front-end will remain in sync with each other. Try dragging the slider below and watch the slider above.
display(w)
_____no_output_____
BSD-3-Clause
notebooks/03.00-Widget_Basics.ipynb
mbektasbbg/tutorial
### Why does displaying the same widget twice work?

Widgets are represented in the back-end by a single object. Each time a widget is displayed, a new representation of that same object is created in the front-end. These representations are called views.

![Kernel & front-end diagram](images/WidgetModelView.png)

### Widget properties

All of the IPython widgets share a similar naming scheme. To read the value of a widget, you can query its `value` property.
w = widgets.IntSlider()
display(w)
w.value
_____no_output_____
BSD-3-Clause
notebooks/03.00-Widget_Basics.ipynb
mbektasbbg/tutorial
Similarly, to set a widget's value, you can set its `value` property.
w.value = 100
_____no_output_____
BSD-3-Clause
notebooks/03.00-Widget_Basics.ipynb
mbektasbbg/tutorial
### Keys

In addition to `value`, most widgets share `keys`, `description`, and `disabled`. To see the entire list of synchronized, stateful properties of any specific widget, you can query the `keys` property. Generally you should not interact with properties starting with an underscore.
w.keys
_____no_output_____
BSD-3-Clause
notebooks/03.00-Widget_Basics.ipynb
mbektasbbg/tutorial
### Shorthand for setting the initial values of widget properties

While creating a widget, you can set some or all of the initial values of that widget by defining them as keyword arguments in the widget's constructor (as seen below).
widgets.Text(value='Hello World!', disabled=True)
_____no_output_____
BSD-3-Clause
notebooks/03.00-Widget_Basics.ipynb
mbektasbbg/tutorial
### Linking two similar widgets

If you need to display the same value two different ways, you'll have to use two different widgets. Instead of attempting to manually synchronize the values of the two widgets, you can use the `link` or `jslink` function to link two properties together (the difference between these is discussed in [Widget Events](08.00-Widget_Events.ipynb)). Below, the values of two widgets are linked together.
a = widgets.FloatText()
b = widgets.FloatSlider()
display(a, b)

mylink = widgets.link((a, 'value'), (b, 'value'))
_____no_output_____
BSD-3-Clause
notebooks/03.00-Widget_Basics.ipynb
mbektasbbg/tutorial
### Unlinking widgets

Unlinking the widgets is simple. All you have to do is call `.unlink` on the link object. Try changing one of the widgets above after unlinking to see that they can be independently changed.
# mylink.unlink()
_____no_output_____
BSD-3-Clause
notebooks/03.00-Widget_Basics.ipynb
mbektasbbg/tutorial
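The `jslink` variant mentioned above links the two properties purely on the front-end. The snippet below is a minimal sketch (not part of the original notebook; the widget names are illustrative) showing the same link-then-unlink pattern with `jslink`:

```python
c = widgets.IntSlider(description='c')
d = widgets.IntText(description='d')
display(c, d)

# synchronization happens in the browser, without a round-trip to the kernel
js_link = widgets.jslink((c, 'value'), (d, 'value'))

# ... later, make the two widgets independent again
js_link.unlink()
```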
### `observe` changes in a widget value

Almost every widget can be observed for changes in its value that trigger a call to a function. The example below is the slider from the first notebook of the tutorial. The `HTML` widget below the slider displays the square of the number.
slider = widgets.FloatSlider(
    value=7.5,
    min=5.0,
    max=10.0,
    step=0.1,
    description='Input:',
)

# Create non-editable text area to display square of value
square_display = widgets.HTML(description="Square: ", value='{}'.format(slider.value**2))

# Create function to update square_display's value when slider changes
def update_square_display(change):
    square_display.value = '{}'.format(change.new**2)

slider.observe(update_square_display, names='value')

# Put them in a vertical box
widgets.VBox([slider, square_display])
_____no_output_____
BSD-3-Clause
notebooks/03.00-Widget_Basics.ipynb
mbektasbbg/tutorial
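For simple slider-plus-readout patterns like the one above, `ipywidgets.interact` wires up the `observe` machinery for you. A minimal sketch (an aside, not part of the original notebook):

```python
from ipywidgets import interact

def show_square(x):
    print(x ** 2)

# the (min, max, step) abbreviation builds a FloatSlider and re-runs show_square on every change
interact(show_square, x=(5.0, 10.0, 0.1))
```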
## Statistical study of alternative blocks/chains

Analysis by IsthmusCrypto for the [Monero Archival Project](https://github.com/mitchellpkt/monero_archival_project), a product of *Noncesense-research-lab*.

Contributors: [NeptuneResearch](https://github.com/neptuneresearch), [Nathan](https://github.com/neffmallon), [IsthmusCrypto](https://github.com/mitchellpkt)

This notebook investigates various phenomena related to the mining of orphaned blocks and alt chains. The data was collected by a MAP archival node, running a customized daemon modified by NeptuneResearch.

You can jump to the **Results** section if you want to skip data import and cleaning/engineering.

### Background

See CuriousInventor's 5-minute [non-technical introduction](https://www.youtube.com/watch?v=t5JGQXCTe3c) to Bitcoin for a review of why forks in the blockchain can occur naturally, and how they are resolved by the consensus process. We will see several instances of these benign latency-induced forks, along with alt-chain events arising from different mechanisms.

Monero aims for a 2-minute block time by adjusting the 'difficulty' of the solutions. Since there is a heavy element of chance involved in mining, some intervals between blocks will be shorter/longer than 2 minutes.

### Conventions

The data in this analysis is already separated into blocks that became the main chain ("block0") and blocks that were part of abandoned chains ("block1"). The block0 data is not recorded at heights with an alternate block. (*note to self: check for exceptions*)

The "random" field exists to differentiate each time a copy of a block is received (e.g. multiple times from multiple peers). Being able to distinguish between instances is important for latency studies, but not for the scope of this notebook, so it is dropped and the data de-duplicated.

### Preliminaries

Where are files saved?
block0s_relative_path = 'data_for_analysis/block0s.txt.backup'  # without the earlier stuff tacked on
block1s_relative_path = 'data_for_analysis/block1s.txt'
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
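As a quick quantitative aside on the 2-minute target mentioned in the background: if block discovery behaves like a Poisson process, the gaps between blocks are exponentially distributed with a 120-second mean, so occasional long gaps are entirely expected. A minimal sketch of the implied tail probabilities (an illustration, not part of the original analysis):

```python
import numpy as np

target_s = 120  # 2-minute target block time

for gap_s in (120, 300, 600, 1200):
    p = np.exp(-gap_s / target_s)  # P(gap > gap_s) for an exponential with mean 120 s
    print(f"P(gap > {gap_s:>4d} s) = {p:.3%}")
```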
Import libraries
from copy import copy
import time
import datetime

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy as sp
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
Disable auto-scroll
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
    return false;
}
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
### Import and pre-process data

Two separate pandas DataFrames are used: `b0s` for the main chain, and `b1s` for blocks that were abandoned.

### Read in from CSV
# Read in the raw data from CSV files
b0s = pd.read_csv(block0s_relative_path)
b1s = pd.read_csv(block1s_relative_path)
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
Sort the rows by height
b0s = b0s.sort_values('block_height')
b1s = b1s.sort_values('block_height')
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
Glance at the data
display(b0s.describe())
display(b1s.describe())
b0s.head()
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
### De-dupe

Because the MAP nodes record *every* instance that a block was received, most heights contain multiple copies from different peers. Each copy is identical, and stamped with a different `block_random`.

For the purposes of this analysis/notebook, we only need one copy of each block. Take a peek at the current duplicates:
b1s.head(20)
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
First we remove the `block_random` *column*, so that multiple copies become indistinguishable.
b0s.drop(['block_random'], 1, inplace=True)
b1s.drop(['block_random'], 1, inplace=True)
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
Then drop the duplicate *rows*
b0s = b0s.drop_duplicates()
b1s = b1s.drop_duplicates()
b1s.head(20)
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
### Feature Engineering

Rather than looking at raw block timestamps, we'll want to study derived features like the time between blocks, alt chain lengths, etc.

### Generate difference columns

`delta_time` is the timestamp difference between two blocks. The `merlin_block` label is applied when a block's miner-reported timestamp precedes the one in the block prior. `delta_height` marks the difference in height between subsequent rows of the DataFrame, used as an imperfect proxy for identifying breaks between different alt chains.
b0s['delta_time'] = b0s['block_time'] - b0s['block_time'].shift()
b1s['delta_time'] = b1s['block_time'] - b1s['block_time'].shift()

b0s['merlin_block'] = 0  # unnecessary?
b1s['merlin_block'] = 0  # unnecessary?
b0s['merlin_block'] = b0s['delta_time'].transform(lambda x: x < 0).astype(int)
b1s['merlin_block'] = b1s['delta_time'].transform(lambda x: x < 0).astype(int)

b0s['delta_height'] = b0s['block_height'] - b0s['block_height'].shift()
b1s['delta_height'] = b1s['block_height'] - b1s['block_height'].shift()
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
### Replace delta_height != 1 with NaN

The first block in an alt chain (or following a gap in `b0s`) will have an anomalous `delta_time` and `delta_height`. We convert these to NaNs so that we can hang on to orphaned blocks and still have the start of alt chains included in our data set.
def mapper(x):
    if x == 1:
        return x
    else:
        return np.nan

b0s.delta_height = b0s.delta_height.map(mapper, na_action='ignore')
b0s.loc[b0s.delta_height.isnull(), ('delta_time')] = np.nan

b1s.delta_height = b1s.delta_height.map(mapper, na_action='ignore')
b1s.loc[b1s.delta_height.isnull(), ('delta_time')] = np.nan

b1s.head(20)
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
What are we left with?
print('Retained ' + str(len(b0s)) + ' main-chain blocks')
print('Retained ' + str(len(b1s)) + ' alt-chain blocks')
Retained 17925 main-chain blocks
Retained 241 alt-chain blocks
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
### Label alt chains

Initialize new labels and features:

- `alt_chain_ID` assigns an arbitrary integer to identify each alt chain. NOTE: there is a (bad) implicit assumption here that alternate blocks at two subsequent heights belong to the same chain. This will be fixed in versions with linked blocks.
- `alt_chain_length` records the length of the alt chain (up to each point, not retroactively adjusted)
- `alt_chain_time` records how long a given chain has been growing (based on the spoofable miner-reported timestamp)
- `terminal_block` labels the 'end' block in each alt chain. Subject to artifacts from the limitation noted for `alt_chain_ID`.
b1s['alt_chain_ID'] = 0
b1s['alt_chain_length'] = b1s['block_height'] - b1s['block_height'].shift()  # how long did this alt-chain get?
b1s['alt_chain_time'] = 0
b1s['terminal_block'] = 0  # is this the last block in the alt-chain?

b1s = b1s.reset_index()
b1s.drop(['index'], 1, inplace=True)

b1s.loc[0, ('alt_chain_length')] = 1  # since we don't know what preceded
b1s.loc[b1s.delta_time.isnull(), ('alt_chain_length')] = 1  # first block in chain

b1s.head(20)
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
### Add new info

Calculate accumulated alt chain length/time, and label terminal blocks.

Note that initialization of the field `alt_chain_length` produces some value > 1 for the first block, and = 1 for subsequent blocks. Below, this is converted into actual alt chain lengths.

Convention: starting alt chains at length 0.
alt_chain_counter = -1

# Loop over rows = blocks
for index, row in b1s.iterrows():
    # If you want extra details:
    # print('index: ' + str(index) + ' this_row_val: ' + str(this_row_val))

    # Check whether this is the first block in the chain, or further down
    if str(row['delta_height']) == 'nan':
        # first block in the alt-chain
        b1s.loc[index, ('alt_chain_length')] = 1
        b1s.loc[max(0, index-1), ('terminal_block')] = 1  # if this is the first block, the last one was terminal on the previous chain
        alt_chain_counter += 1  # increment the counter
        b1s.loc[index, ('alt_chain_ID')] = alt_chain_counter  # mark the counter
    else:
        # subsequent block
        if index > 0:
            b1s.loc[index, ('alt_chain_length')] = b1s.alt_chain_length[index-1] + 1
            delta_t_seconds = b1s.block_time[index] - b1s.block_time[index-1]
            b1s.loc[index, ('alt_chain_time')] = b1s.alt_chain_time[index-1] + delta_t_seconds
            b1s.loc[index, ('alt_chain_ID')] = alt_chain_counter

b1s.head(20)
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
### Results

### General block interval study

Let's take a look at the intervals between blocks, for both the main and alt chains.

What is the *average* interval between blocks?
b0_mean_time_s = np.mean(b0s.delta_time)
b1_mean_time_s = np.mean(b1s.delta_time)

print('Main-chain blocks come with mean time: ' + str(round(b0_mean_time_s)) + ' seconds = ' + str(round(b0_mean_time_s/60, 1)) + ' min')
print('alt-chain blocks come with mean time: ' + str(round(b1_mean_time_s)) + ' seconds = ' + str(round(b1_mean_time_s/60, 1)) + ' min')
Main-chain blocks come with mean time: 120 seconds = 2.0 min
alt-chain blocks come with mean time: 6254 seconds = 104.2 min
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
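Because a handful of extremely long gaps dominate the alt-chain mean, a median/percentile summary is more robust. A minimal sketch reusing the same DataFrames (not part of the original analysis):

```python
for name, df in (('main chain', b0s), ('alt chains', b1s)):
    quartiles = df.delta_time.dropna().quantile([0.25, 0.5, 0.75])
    print(name, 'delta_time quartiles (s):', [round(v) for v in quartiles])
```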
The main chain blocks are 2 minutes apart, on average. This is what we expect, and is a good validation.

The alt chain blocks come at VERY long intervals. The (not-representative) average is almost two hours!

### Visualize block discovery time
fig = plt.figure(figsize=(10, 10), facecolor='white')
plt.style.use('seaborn-white')
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.4)
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
plt.rcParams['font.size'] = 20

ax1 = fig.add_subplot(211)
ax1.set_xlabel('time (seconds)')
ax1.set_ylabel('occurrences')
ax1.set_title('interval between discoveries of main-chain blocks')
plt.hist(b0s.delta_time[b0s.delta_time.notnull()], bins=range(0, 600, 10))
ax1.set_xlim((0, 600))
plt.axvline(x=120, c='red', linestyle=':', linewidth=5)
plt.legend(('120 s = target block time', 'block interval histogram'))

ax2 = fig.add_subplot(212)
plt.hist(b1s.delta_time[b1s.delta_time.notnull()], bins=range(0, 50000, 250))
ax2.set_xlabel('time (seconds)')
ax2.set_ylabel('occurrences')
ax2.set_title('interval between discoveries of sequential alt-chain blocks')
ax2.set_xlim((0, 50000))
plt.axvline(x=120, c='red', linestyle=':', linewidth=5)
plt.legend(('120 s = target block time', 'block interval histogram'))
pass
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
### Observed wait times

**Main chain:**

The top histogram (main chain) shows roughly the distribution that we would expect: long-tailed with a mean of 2 minutes. There seems to be some skew around 60 seconds, which is peculiar. Ittay Eyal and Emin Gün Sirer [point out](http://hackingdistributed.com/2014/01/15/detecting-selfish-mining/) that "One could detect [[selfish mining](https://arxiv.org/pdf/1311.0243.pdf)] by looking at the timestamps on successive blocks in the blockchain. Since mining is essentially an independent random process, we'd expect the interblock time gap to be exponentially distributed. Any deviation from this expectation would be suggestive of selfish mining."

**Alt chains:**

The alt-chain histogram shows mining on an entirely different timescale. Blocks are released very belatedly, often hours behind the preceding height. Note the x-scale in seconds, and the fact that the majority of alt-chain blocks are minutes, hours, or days behind the preceding block. These blocks have NO chance of ever becoming the main chain. Something unusual is happening, and this figure captures multiple phenomena:

- Most of the natural latency-induced forks would appear in the bin around 120 seconds (2 minutes).
- Some of the single blocks with extreme times (e.g. ~50,000 s) are probably due to somebody solo mining a block by accident or for fun.
- There are many strong chains that are too long (e.g. 15 blocks) for the first case, and have too much hashpower for the second case.

### Expected waiting times

How long do we expect to wait between blocks? "Events that occur independently with some average rate are modeled with a Poisson process. **The waiting times between k occurrences of the event are Erlang distributed.** The related question of the number of events in a given amount of time is described by the Poisson distribution." ... from [Wikipedia](https://en.wikipedia.org/wiki/Erlang_distribution).

*Credit: Erlang analysis started by Nathan*
dt = b0s.delta_time[b0s.delta_time > 0]
sns.distplot(dt, bins=np.linspace(0, 1000, 100))

mean = np.nanmean(dt)
stddev = np.std(dt)
lam = mean / stddev**2
k = 2000
x = range(10000)
shift = 1975
y = sp.stats.erlang.pdf(x, k, lam)
x_plot = [xi - shift for xi in x]
plt.plot(x_plot, y, color='r', label="Erlong")

plt.xlim(0, 750)
plt.title("Distribution of times to solve block.")
plt.legend(('Observed MRT', 'Erlong'))
#plt.savefig("dist")
#plt.figure()
#plt.plot(x, sp.stats.erlang.pdf(x, k, lam, 100000), color='r', label = "Erlong")
pass
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
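One way to put the "we'd expect the interblock time gap to be exponentially distributed" remark on a quantitative footing is a goodness-of-fit test against an exponential with the observed mean. This is a hedged sketch, not part of the original analysis, and it uses the miner-reported timestamps, so the caveats discussed next still apply:

```python
from scipy import stats

dt_obs = b0s.delta_time[b0s.delta_time > 0]

# Kolmogorov-Smirnov test of the observed gaps against Exp(scale = observed mean)
ks_stat, p_value = stats.kstest(dt_obs, 'expon', args=(0, dt_obs.mean()))
print('KS statistic: {:.3f}, p-value: {:.3g}'.format(ks_stat, p_value))
```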
If we are correct that Erlang statistics theoretically describe the situation, there are two main reasons why the distribution does not match observations:

- IsthmusCrypto used arbitrary (and presumably wrong) parameters to make the fit line up.
- This analysis used miner-reported timestamps (MRTs), which are known to be wrong (see Merlin blocks, below). The node-receipt timestamps (NRTs) will accurately reflect the block announcement intervals.

How many alternative blocks did we see that were single orphaned blocks, and how many were part of longer alternative chains?
orph_block_cut1 = copy(b1s[b1s.alt_chain_length == 1])
orph_block_cut2 = copy(orph_block_cut1[orph_block_cut1.terminal_block == 1])

experiment_time_d = (max(orph_block_cut2.block_time) - min(orph_block_cut2.block_time))/86400  # seconds per day
experiment_time_height = (max(orph_block_cut2.block_height) - min(orph_block_cut2.block_height))

num_of_orphans = len(orph_block_cut2)
num_alt_chain_blocks = len(b1s) - num_of_orphans

orphans_per_day = num_of_orphans/experiment_time_d
heights_per_orphan = experiment_time_height/num_of_orphans
heights_per_side_block = experiment_time_height/num_alt_chain_blocks

print('Experiment lasted for:' + str(round(experiment_time_d)) + ' days = ' + str(experiment_time_height) + ' heights')
print('Observed ' + str(num_of_orphans) + ' single orphan blocks')
print('Observed ' + str(num_alt_chain_blocks) + ' blocks assocated with longer alternative chains')
print('This corresponds to 1 natural orphan per ' + str(round(heights_per_orphan)) + ' heights')
print('This corresponds to 1 alt-chain-related block per ' + str(round(heights_per_side_block)) + ' heights')
Experiment lasted for:85 days = 60171 heights Observed 22 single orphan blocks Observed 219 blocks assocated with longer alternative chains This corresponds to 1 natural orphan per 2735 heights This corresponds to 1 alt-chain-related block per 2735 heights
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
Assuming that the longer side chains are a different phenomenon that does not impact the frequency of natural orphaned blocks, how often would we expect to see a triple block?
monero_blocks_per_day = 720
heights_per_triple = heights_per_orphan * heights_per_side_block
triple_frequency_days = heights_per_triple/monero_blocks_per_day

print('Statistically we expect to see a triple block once per ' + str(round(heights_per_triple)) + ' blocks (' + str(round(triple_frequency_days)) + ' days)')
Statistically we expect to see a triple block once per 751463 blocks (1044 days)
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
print('Observed: ' + str(number_of_orphans) + ' over the course of ' + str(round(experiment_time_d)) + ' days.')
print('This corresponds to ' + str(round(orphans_per_day,3)) + ' orphans per day.')
print('Over ' + str(experiment_time_height) + ' blocks, averaged:')
print(str(round(orphans_per_height,5)), ' orphans per height')
print(str(round(1/orphans_per_height)) + ' blocks per orphan')

### Merlin Blocks

Check it out, there appear to be time-traveling blocks in the main chain!

These plots are based off of the miner-reported timestamps (MRTs), which are spoofable. This phenomenon will probably disappear with node-receipt timestamps (NRTs)...
fig = plt.figure(figsize=(10, 5), facecolor='white', dpi=300)
plt.style.use('seaborn-white')
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
plt.rcParams['font.size'] = 20

plt.hist(b0s.delta_time.dropna(), bins=np.linspace(-500, 1000, 100))
plt.xlabel('time between blocks (seconds)')
plt.ylabel('occurrences')
plt.title('Block interval [according to miner timestamp] in sec')
plt.axvline(x=120, c='red', linestyle=':', linewidth=5)
plt.legend(('120 s = target block time', 'block interval histogram'))
pass

print(str(round(len(b0s.delta_time[b0s.delta_time < 0])/len(b0s.delta_time)*100, 2)) + '% of blocks on the main chain were delivered from the future.')
2.68% of blocks on the main chain were delivered from the future.
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
**Time-traveling blocks:**

About 2.7% of blocks on the main chain arrive with a miner timestamp BEFORE the timestamp of the block prior. This conclusively shows that miner-reported timestamps cannot be trusted, and further analysis must rely on node-receipt timestamps.

### Direction of time travel

Let `M` be the height of a Merlin block, meaning that `[time(M) < time(M-1)]`. This could be caused by:

- block `M-1` being mined with a false late timestamp
- block `M` being mined with a false early timestamp

Let's take a look at which it could be, by looking at the discovery times of the blocks before and after the Merlin blocks.
# Indexing
M_block_inds = b0s.index[b0s.merlin_block == 1].tolist()
M_parent_inds = [x - 1 for x in M_block_inds]
M_child_inds = [x + 1 for x in M_block_inds]

fig = plt.figure(figsize=(10, 10))
plt.style.use('seaborn-white')
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.4)
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
plt.rcParams['font.size'] = 20
plt.title('MRT block interval for Merlin blocks')
plt.xlabel('block discovery time (s)')
plt.ylabel('occurrences')
plt.hist(b0s.delta_time[M_block_inds].dropna())

fig = plt.figure(figsize=(10, 10))
plt.style.use('seaborn-white')
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.4)
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
plt.rcParams['font.size'] = 18
plt.title('MRT-based interval for heights before/after Merlin blocks')
plt.xlabel('block discovery time (s)')
plt.ylabel('occurrences')
#plt.hist(b0s.delta_time.dropna(), alpha=0.5, bins=np.linspace(-500,500,500))
binnum = 20
plt.hist(b0s.delta_time[M_parent_inds].dropna(), alpha=0.6, bins=np.linspace(-500, 500, binnum))  # gets a different warning
plt.hist(b0s.delta_time[M_child_inds].dropna(), alpha=0.6, bins=np.linspace(-500, 500, binnum))  # gets a different warning
plt.legend(('for height M-1', 'for height M+1'))
pass
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
From the bottom plot, we notice that the blocks *preceding* (according to height) Merlin blocks mostly arrived on schedule (since, qualitatively at least, the interval distribution for M-1 blocks matches the interval distribution for all blocks).

However, many of the blocks *following* (according to height) Merlin blocks arrive conspicuously late...!

**This suggests that many of the Merlin blocks appear to be time traveling because their miner-reported timestamp is too early** (not because the block at height M-1 was stamped too late).

### Fishing for multi-Merlans

Interestingly, the above plot shows Merlin blocks that follow Merlin blocks (and are followed by Merlin blocks)! Let's fish some of these up for visual inspection, since this would mean multiple timestamps moving in reverse...
b0s_M = copy(b0s[b0s.merlin_block == 1])
del b0s_M['delta_time']
# b0s_M['delta_time'] = b0s_M['block_time']-b0s_M['block_time'].shift()
b0s_M['delta_height'] = b0s_M['block_height'] - b0s_M['block_height'].shift()

################################################
## Sticking a pin in this section for now.....
## Not sure why 1589928 is flagged as a Merlin?
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
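To complement the qualitative reading of the Merlin histograms, a minimal sketch (not part of the original notebook) comparing median MRT intervals at heights M-1, M and M+1, reusing the index lists built above:

```python
# note: label-based indexing with M_child_inds follows the same pattern as the plotting cell above
print('median delta_time, all blocks (s):', b0s.delta_time.median())
print('median delta_time at M-1 (s):', b0s.delta_time[M_parent_inds].median())
print('median delta_time at M   (s):', b0s.delta_time[M_block_inds].median())
print('median delta_time at M+1 (s):', b0s.delta_time[M_child_inds].median())
```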
### Investigate alt chains

Let's look at alt chains. The following plots will show how quickly each chain grew.

How long do these alt chains persist? We expect lots of alt chains with length 1 or 2 from natural causes. Is anybody out there dumping mining power into longer alt chains? We'll consider 'longer' in terms of height (top plot), and in terms of time (bottom plot).
fig = plt.figure(figsize=(10, 10))
plt.style.use('seaborn-white')
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.4)
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
plt.rcParams['font.size'] = 20

ax1 = fig.add_subplot(211)
ax1.set_xlabel('alt-chain length')
ax1.set_ylabel('frequency')
ax1.set_title('How long are the alt chains? (length)')
plt.hist(b1s.alt_chain_length[b1s.terminal_block == 1], bins=range(0, 75))
ax1.autoscale(enable=True, axis='x', tight=True)

ax2 = fig.add_subplot(212)
plt.hist(b1s.alt_chain_time[b1s.terminal_block == 1], bins=range(0, 65000, 2500))
ax2.set_xlabel('time (seconds)')
ax2.set_ylabel('frequency')
ax2.set_title('How long are the alt chains? (time)')
ax2.autoscale(enable=True, axis='x', tight=True)
pass
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
Unexpectedly, there are lots of alt chains being mined 10, 20, and 30 blocks deep (top plot). Some of these futile alt chains are mined for weeks (bottom plot). Highly unnatural...

### A closer look at the growth of alt chains

It is time to individually inspect different alt chains, and see how fast they were produced. The plots below fingerprint the growth of each alt chain.

Each chain is colored/numbered differently. Each point shows a single block: the x-axis position indicates how long the alt chain has grown, and the y-axis position indicates the *cumulative* time that has gone into mining that particular alt chain (calculated as the difference between the timestamp on this block and its first block).

The speed with which a given entity can produce blocks for their alt chain is proportional to their hash power. This can be used to identify distinct signatures of different phenomena or entities! Two different long alt chains that were mined on the same equipment will show up together on these plots, assuming that their hashrate hasn't changed between runs.

The red line shows 2 minutes per block, so any entity producing blocks near that speed can feasibly overtake (and become) the main chain. The further an alt chain is from the red line, the more astronomically improbable this becomes.
# Let's take a look at the first 20 blocks:
max_chain_length = 20
max_chain_time = 25000
norm_block_time = 120  # seconds

fig = plt.figure(figsize=(8, 8), dpi=100)
plt.style.use('seaborn-white')
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['font.size'] = 15

plt.scatter(b1s.alt_chain_length, b1s.alt_chain_time+1, c=b1s.alt_chain_ID, cmap='tab20')
#fig.suptitle('Looking at alt chain lengths and time, all blocks')
plt.xlabel('Nth block of the alt chain')
plt.ylabel('Accumulated time on this alt chain (seconds)')
plt.xlim((0, max_chain_length))
plt.ylim((0, max_chain_time))
plt.title('Growth of alt chains')

for i, txt in enumerate(b1s.alt_chain_ID):
    # print(i)
    X = b1s.alt_chain_length[i]
    Y = b1s.alt_chain_time[i]
    S = b1s.alt_chain_ID[i]
    # print("X = " + str(X), " // Y = " + str(Y) + " // S = " + str(S))
    if i > 0 and X <= max_chain_length and Y <= max_chain_time:
        plt.text(X, Y, S)

# Add on a regular rate
plt.plot((0, max_chain_length), (0, max_chain_length*norm_block_time), c='red', linewidth=4, linestyle=':')
pass
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
**Wow**, there are several interesting things to note:

Several of the alt chains, separated by weeks, show the exact same signature in hashrate (e.g. 5 and 11) and presumably were produced by the same equipment.

Alt chain 10 produced the first 8 blocks at approximately 2 minutes per block! This could indicate two things:

- An alt chain came within a razor's edge of overtaking the main chain
- An alt chain DID overtake the main chain (!!!) so the original version is marked as "alternate" here

Something seems to cause the chains to lose steam about 7 or 8 blocks in. This could be a coincidence from looking at a small number of examples, but seems prominent in 5, 10, 11, 13. Equipment overheating??

Who could have that much hash power? And these are all since July, after we were supposedly rid of ASICs.

Absurdly, one of the alt chains was 70 blocks long, with an average of a four-hour block discovery time.

- To reliably mine at ~4 hr/block the entity must have around 1% of total network hashrate!
- This single alt chain would have used around 40,000 EUR worth of energy! That is not something that an unlucky amateur miner would accidentally overlook.

Here's a zoomed-out version of the above plot:
fig = plt.figure(figsize=(4, 4), dpi=100)
plt.scatter(b1s.alt_chain_length, b1s.alt_chain_time+1, c=b1s.alt_chain_ID, cmap='tab10')
fig.suptitle('Looking at alt chain lengths and time, all blocks')
plt.xlabel('Nth block of the alt chain')
plt.ylabel('Accumulated time (seconds)')
plt.axis('tight')
pass
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
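A quick back-of-envelope check of the "~1% of total network hashrate" figure above: an entity's expected share of blocks equals its share of hashrate, so averaging ~4 hours per block on a private chain implies roughly 120 s / 14,400 s of the network rate. A minimal sketch of that arithmetic (an illustration, not part of the original notebook):

```python
network_block_time_s = 120          # network-wide target block time
observed_block_time_s = 4 * 3600    # ~4 hours per block on the 70-block alt chain

hashrate_fraction = network_block_time_s / observed_block_time_s
print('Implied fraction of network hashrate: {:.2%}'.format(hashrate_fraction))  # ~0.8%
```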
Let's look at these chains in terms of a (very rough and noisy) estimate of their hashpower, which is inversely proportional to the length of time between discovering blocks. This can be normalized by the network discovery time (average 120 s) to give the alt chain's hashrate relative to the network-wide hashrate:

`fraction_of_network_hashrate = [(alt chain discovery time)/(normal time = 120 s)]^(-1)`

This is a **really** noisy proxy, given the element of chance in block discovery. However, it can be seen that some chains like 10 and 13 consistently perform as though they have a *significant* amount of hash power. Don't stare at this plot for too long; it needs smoothing and is better described by the summary statistics that follow later.
# Let's take a look at the first 20 blocks:
max_chain_length = 20
norm_block_time = 120  # seconds
max_prop = 2

fig = plt.figure(figsize=(8, 8), dpi=100)
plt.style.use('seaborn-white')
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['font.size'] = 15

plt.scatter(b1s.alt_chain_length, np.reciprocal(b1s.delta_time/norm_block_time), c=b1s.alt_chain_ID, cmap='tab20')
#fig.suptitle('Looking at alt chain lengths and time, all blocks')
plt.xlabel('Nth block of the alt chain')
plt.ylabel('Hashrate relative to network total (seconds)')
plt.xlim((0, max_chain_length))
plt.ylim((0, max_prop))
plt.title('Rough estimate of hashrate')

for i, txt in enumerate(b1s.alt_chain_ID):
    # print(i)
    X = b1s.alt_chain_length[i]
    Y = np.reciprocal(b1s.delta_time[i]/120)
    S = b1s.alt_chain_ID[i]
    # print("X = " + str(X), " // Y = " + str(Y) + " // S = " + str(S))
    if i > 0 and X <= max_chain_length and Y <= max_prop and Y > 0:
        plt.text(X, Y, S)

pass

# Add on a regular rate
plt.axhline(y=1, c='red', linestyle=':', linewidth=5)
pass
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
### Ratio of chain length to chain time

For each chain, we calculate the total chain_length/chain_time to produce the average blocks per second. This should be directly proportional to the hashrate being used to mine that chain.
b1s_terminal = copy(b1s[b1s.terminal_block == 1])
b1s_terminal['average_speed'] = np.nan
b1s_terminal['fraction_of_network_hashrate'] = np.nan

# Loop over rows = blocks
for index, row in b1s_terminal.iterrows():
    if b1s_terminal.alt_chain_time[index] > 0:
        b1s_terminal.loc[index, ('average_speed')] = b1s_terminal.alt_chain_length[index]/b1s_terminal.alt_chain_time[index]
        b1s_terminal.loc[index, ('fraction_of_network_hashrate')] = b1s_terminal.average_speed[index]*120  # normalized against the usual 1/120 blocks/second

fig = plt.figure(figsize=(10, 10), facecolor='white')
plt.style.use('seaborn-white')
plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.4)
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
plt.rcParams['font.size'] = 20

plt.hist(b1s_terminal.average_speed.dropna(), bins=np.linspace(0, 0.1, 30))
plt.xlabel('average mining speed (blocks per second)')
plt.ylabel('occurrences')
plt.title('')
plt.axvline(x=(1/120), c='red', linestyle=':', linewidth=5)
plt.legend(('1/120 blocks/second = network usual', 'average speed of each alt chain'))
pass
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
Almost all of the alternate chains have an average speed that is SLOWER than the main chain (to the left of the red line). Some of the chains clocked in with speeds higher than the network average. Let's see if these are long chains that should have overtaken, or fluke statistics from short chains.
b1s_order = copy(b1s_terminal.sort_values(['average_speed'], ascending=False))
display(b1s_order.dropna()[['alt_chain_ID', 'alt_chain_length', 'alt_chain_time', 'average_speed', 'fraction_of_network_hashrate']])
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
As expected, the chains that clocked in faster than average were all just 2-block detours. Let's take a look without those.
b1s_order = copy(b1s_order[b1s_order.alt_chain_length > 2])
display(b1s_order[['alt_chain_ID', 'alt_chain_length', 'alt_chain_time', 'average_speed', 'fraction_of_network_hashrate']])
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
Surprising observations:

Note that the chain with ID 10 was 20 blocks long, and averaged production at a speed that would have required **13% of the total network hashrate.**

Chain ID 13 managed 18 blocks at a speed consistent with having **8% of the total network hashrate.**

### Comparison between chains

Now we look at whether an influx of hashrate onto an alt chain corresponds with a loss of hashrate on the main chain. We'll stick with looking at times, because taking the reciprocal makes a noisy function noisier.
# Create plot
fig = plt.figure()
plt.scatter(b0s.block_height, b0s.delta_time)
plt.scatter(b1s.block_height, b1s.delta_time)
#plt.xlim(1580000, 1615000)
plt.ylim(0, 1500)
plt.title('Matplot scatter plot')
plt.show()
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
..... This doesn't line up exactly right. Need to get b0 times during b1 stretches...

### Summarization of the alt chains

Quickly spitting out some text data so I can cross-reference these alt chains against mining timing on the main chain.
verbose_text_output = 1

if verbose_text_output:
    for i, x in enumerate(b1s.alt_chain_ID.unique()):
        try:
            #print('alt chain #' + str(i) + ' median time: ', + np.median(b1s.block_time[b1s.alt_chain_ID==i]))
            print('alt chain #' + str(i) + ' median time: ' + time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(np.median(b1s.block_time[b1s.alt_chain_ID==i]))))
            print('\t... started at height ' + str(min(b1s.block_height[b1s.alt_chain_ID==i])) + '\n')
        except:
            pass
alt chain #0 median time: 2018-04-04 23:11:27 ... started at height 1545098 alt chain #1 median time: 2018-04-06 03:24:11 ... started at height 1546000 alt chain #2 median time: 2018-04-10 00:11:39 ... started at height 1547963 alt chain #3 median time: 2018-04-29 15:03:35 ... started at height 1562061 alt chain #4 median time: 2018-05-01 19:00:07 ... started at height 1563626 alt chain #5 median time: 2018-05-07 07:36:20 ... started at height 1565690 alt chain #6 median time: 2018-05-05 20:01:05 ... started at height 1566500 alt chain #7 median time: 2018-05-06 10:06:00 ... started at height 1566908 alt chain #8 median time: 2018-05-08 18:22:59 ... started at height 1568536 alt chain #9 median time: 2018-05-17 01:15:02 ... started at height 1574526 alt chain #10 median time: 2018-05-23 02:55:11 ... started at height 1578847 alt chain #11 median time: 2018-05-25 03:13:18 ... started at height 1580196 alt chain #12 median time: 2018-05-27 07:47:34 ... started at height 1581881 alt chain #13 median time: 2018-05-29 08:53:28 ... started at height 1583266 alt chain #14 median time: 2018-05-29 14:14:48 ... started at height 1583283 alt chain #15 median time: 2018-05-29 12:35:05 ... started at height 1583284 alt chain #16 median time: 2018-05-29 18:22:29 ... started at height 1583285 alt chain #17 median time: 2018-05-30 21:51:04 ... started at height 1584464 alt chain #18 median time: 2018-06-03 21:10:57 ... started at height 1587293 alt chain #19 median time: 2018-06-07 11:25:10 ... started at height 1589926 alt chain #20 median time: 2018-06-07 12:36:02 ... started at height 1589929 alt chain #21 median time: 2018-06-08 10:59:40 ... started at height 1590614 alt chain #22 median time: 2018-06-09 16:08:46 ... started at height 1591490 alt chain #23 median time: 2018-06-10 03:44:43 ... started at height 1591844 alt chain #24 median time: 2018-06-10 11:35:53 ... started at height 1592056 alt chain #25 median time: 2018-06-10 20:11:19 ... started at height 1592321 alt chain #26 median time: 2018-06-10 22:24:19 ... started at height 1592393 alt chain #27 median time: 2018-06-11 12:05:59 ... started at height 1592780 alt chain #28 median time: 2018-06-11 12:08:21 ... started at height 1592780 alt chain #29 median time: 2018-06-12 10:41:18 ... started at height 1593453 alt chain #30 median time: 2018-06-13 01:05:44 ... started at height 1593865 alt chain #31 median time: 2018-06-13 01:07:11 ... started at height 1593865 alt chain #32 median time: 2018-06-13 11:34:31 ... started at height 1594175 alt chain #33 median time: 2018-06-15 06:42:39 ... started at height 1595491 alt chain #34 median time: 2018-06-16 19:10:42 ... started at height 1596549 alt chain #35 median time: 2018-06-17 14:29:53 ... started at height 1597199 alt chain #36 median time: 2018-06-17 15:37:00 ... started at height 1597231 alt chain #37 median time: 2018-06-17 15:37:13 ... started at height 1597232 alt chain #38 median time: 2018-06-18 20:43:18 ... started at height 1598110 alt chain #39 median time: 2018-06-21 12:29:33 ... started at height 1599986 alt chain #40 median time: 2018-06-21 12:30:03 ... started at height 1599986 alt chain #41 median time: 2018-06-21 12:30:41 ... started at height 1599987 alt chain #42 median time: 2018-06-24 00:29:35 ... started at height 1601812 alt chain #43 median time: 2018-06-24 05:21:28 ... started at height 1601963 alt chain #44 median time: 2018-06-24 08:52:19 ... started at height 1602053 alt chain #45 median time: 2018-06-25 13:26:38 ... 
started at height 1602963 alt chain #46 median time: 2018-06-27 09:15:11 ... started at height 1604268 alt chain #47 median time: 2018-06-27 09:16:17 ... started at height 1604268 alt chain #48 median time: 2018-06-28 19:58:12 ... started at height 1605269 alt chain #49 median time: 2018-06-29 06:02:34 ... started at height 1605586 alt chain #50 median time: 2018-07-02 12:27:47 ... started at height 1607956
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
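As a starting point for the "need to get b0 times during b1 stretches" note above, here is a minimal sketch (a hypothetical helper, not part of the original notebook) that pulls main-chain intervals over the height range spanned by a given alt chain:

```python
def main_chain_times_during(chain_id):
    """Return main-chain delta_time values over the height span of one alt chain."""
    heights = b1s.block_height[b1s.alt_chain_ID == chain_id]
    mask = (b0s.block_height >= heights.min()) & (b0s.block_height <= heights.max())
    return b0s.delta_time[mask]

for chain_id in (5, 10, 13):  # illustrative chain IDs from the plots above
    overlap = main_chain_times_during(chain_id)
    print('alt chain', chain_id, ': mean main-chain delta_time (s) =', overlap.mean())
```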
Work in progress. Check back later for more excitement!

Ah, here's a bug to fix: NaNs in delta_time get marked as a `merlin_block`, which is not true.
b0s[b0s.merlin_block==1]
_____no_output_____
MIT
analyses/.ipynb_checkpoints/altchain_temporal_study-checkpoint.ipynb
Gingeropolous/monero_archival_project
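Regarding the bug noted above, one possible fix is sketched below, assuming the issue is exactly as described (i.e. rows whose `delta_time` was replaced by NaN should not carry the flag):

```python
# clear the Merlin flag wherever delta_time has been set to NaN
b0s.loc[b0s.delta_time.isnull(), 'merlin_block'] = 0

# equivalently, recompute the flag only on valid intervals
b0s['merlin_block'] = (b0s.delta_time.notnull() & (b0s.delta_time < 0)).astype(int)
```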
## Linear Solvers Quickstart

This tutorial illustrates the basic usage and functionality of ProbNum's linear solver. In particular:

- Loading a random linear system from ProbNum's `problems.zoo`.
- Solving the system with one of ProbNum's linear solvers.
- Visualizing the return objects of the solver: these are distributions that describe the values of possible solutions and how probable they are.
import warnings
warnings.filterwarnings('ignore')

# Make inline plots vector graphics instead of raster graphics
%matplotlib inline
from IPython.display import set_matplotlib_formats

set_matplotlib_formats("pdf", "svg")

# Plotting
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

plt.style.use("../../probnum.mplstyle")
_____no_output_____
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
### Linear Systems & Linear Solvers

Consider a linear system of the form

$$A \mathbf{x} = \mathbf{b}$$

where $A\in\mathbb{R}^{n\times n}$ is a symmetric positive definite matrix, $\mathbf{b}\in\mathbb{R}^n$ is a vector and $\mathbf{x}\in\mathbb{R}^n$ is the unknown solution of the linear system. Solving such a linear system is arguably one of the most fundamental computations in statistics, machine learning and scientific computation. Many problems can be reduced to the solution of one or many (large-scale) linear systems. Some examples include least-squares regression, kernel methods, second-order optimization, quadratic programming, Kalman filtering, linear differential equations and all Gaussian (process) inference. Here, we will solve such a system using one of ProbNum's *probabilistic linear solvers*.

### Loading a Test Problem with ProbNum's `problems.zoo` package

We begin by creating a random linear system. ProbNum lets you quickly generate test problems via its `problems.zoo` package. In particular we generate an $n=25$ dimensional symmetric positive definite random matrix $A$ using `random_spd_matrix` with a predefined eigenspectrum, as well as a normal random vector $\mathbf{b}$.
import numpy as np
from probnum.problems.zoo.linalg import random_spd_matrix

rng = np.random.default_rng(42)  # for reproducibility

n = 25  # dimensionality

# generate linear system
spectrum = 10 * np.linspace(0.5, 1, n) ** 4
A = random_spd_matrix(rng=rng, dim=n, spectrum=spectrum)
b = rng.normal(size=(n, 1))

print("Matrix condition: {:.2f}".format(np.linalg.cond(A)))
print("Eigenvalues: {}".format(np.linalg.eigvalsh(A)))
Matrix condition: 16.00 Eigenvalues: [ 0.625 0.73585981 0.8608519 1.00112915 1.15788966 1.33237674 1.52587891 1.73972989 1.97530864 2.23403931 2.51739125 2.82687905 3.1640625 3.53054659 3.92798153 4.35806274 4.82253086 5.32317173 5.86181641 6.44034115 7.06066744 7.72476196 8.43463662 9.19234853 10. ]
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
Now we visualize the linear system.
# Plot linear system
fig, axes = plt.subplots(
    nrows=1,
    ncols=4,
    figsize=(5, 3.5),
    sharey=True,
    squeeze=False,
    gridspec_kw={"width_ratios": [4, 0.25, 0.25, 0.25]},
)

vmax = np.max(np.hstack([A, b]))
vmin = np.min(np.hstack([A, b]))

# normalize diverging colobar, such that it is centered at zero
norm = TwoSlopeNorm(vmin=vmin, vcenter=0, vmax=vmax)

axes[0, 0].imshow(A, cmap="bwr", norm=norm)
axes[0, 0].set_title("$A$", fontsize=24)

axes[0, 1].text(0.5, A.shape[0] / 2, "$\\bm{x}$", va="center", ha="center", fontsize=32)
axes[0, 1].axis("off")

axes[0, 2].text(0.5, A.shape[0] / 2, "$=$", va="center", ha="center", fontsize=32)
axes[0, 2].axis("off")

axes[0, 3].imshow(b, cmap="bwr", norm=norm)
axes[0, 3].set_title("$\\bm{b}$", fontsize=24)

for ax in axes[0, :]:
    ax.set_xticks([])
    ax.set_yticks([])

plt.tight_layout()
plt.show()
_____no_output_____
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
### Solve the Linear System with ProbNum's Solver

We now use ProbNum's probabilistic linear solver `problinsolve` to estimate the solution vector $\mathbf{x}$.

The algorithm iteratively chooses *actions* $\mathbf{s}$ and makes linear *observations* $\mathbf{y}=A \mathbf{s}$ to update its belief over the solution $\mathbf{x}$, the matrix $A$ and its inverse $H:=A^{-1}$. We do not run the solver to convergence here (which would require 25 iterations and yield an exact solution) but only run it for `maxiter=10` iterations.
from probnum.linalg import problinsolve

# Solve with probabilistic linear solver
x, Ahat, Ainv, info = problinsolve(A=A, b=b, maxiter=10)
print(info)
{'iter': 10, 'maxiter': 10, 'resid_l2norm': 0.022193410186189838, 'trace_sol_cov': 27.593259516810043, 'conv_crit': 'maxiter', 'rel_cond': None}
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
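To see the "exact after 25 iterations" remark in practice, a small sketch (not part of the original tutorial, reusing `A`, `b` and `n` from the cells above) runs the solver to convergence and compares against a dense solve:

```python
x_full, _, _, info_full = problinsolve(A=A, b=b, maxiter=n)
x_direct = np.linalg.solve(A, b)[:, 0]

print(info_full)
print("max |E(x) - x_direct| after {} iterations: {:.2e}".format(n, np.max(np.abs(x_full.mean - x_direct))))
```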
### Visualization of Return Objects & Estimated Uncertainty

The solver returns random variables $\mathsf{x}$, $\mathsf{A}$, and $\mathsf{H}:=\mathsf{A}^{-1}$ (described by distributions), which are called `x`, `Ahat` and `Ainv` respectively in the cell above. Those distributions describe possible values of the solution $\mathbf{x}$ of the linear system, the matrix $A$ and its inverse $H=A^{-1}$, and how probable the solver considers each of them to be. In other words, the distributions describe the uncertainty the solver still has about the value of their respective quantity. Below we visualize those distributions.

For all of them, the mean estimator $\mathbb{E}(\cdot)$ of the random variable can be used as a "best guess" for the true value of each quantity. In fact, the mean estimate $\hat{x}:=\mathbb{E}(\mathsf{x})$ in this particular tutorial coincides with the conjugate gradient solution (see e.g., [1, 2]).

### Solution $\mathsf{x}$

The solution object `x` is a normal distribution of the size of the linear system (in this case $n=25$) which quantifies the uncertainty over the solution. The mean $\mathbb{E}(\mathsf{x})$ of the random variable is the "best guess" for $\mathbf{x}$. The covariance matrix (of which we only print the first row below) quantifies possible deviation of the solution from the mean. We can also sample from the normal distribution to obtain possible solution vectors $\mathsf{x}_1$, $\mathsf{x}_2$, ...
x x.mean x.cov.todense()[0, :] n_samples = 10 x_samples = x.sample(rng=rng, size=n_samples) x_samples.shape
_____no_output_____
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
Furthermore, the standard deviations together with the mean $\mathbb{E}(\mathsf{x})$ yield credible intervals for each dimension (entry) of $\mathbf{x}$. Credible intervals are a quick way to visualize the numerical uncertainty, but keep in mind that they only consider marginal (per element) distributions of $\mathsf{x}$ and do not capture correlations between the entries of $\mathsf{x}$.In the plot below, the samples are drawn from the joint distribution of $\mathsf{x}$ taking into account cross correlations between the entries. The error bars only show the marginal credible intervals.
plt.figure() plt.plot(x_samples[0, :], '.', color='gray', label='sample') plt.plot(x_samples[1:, :].T, '.', color='gray') plt.errorbar(np.arange(0, 25), x.mean, 1 * x.std, ls='none', label='68\% credible interval') plt.plot(x.mean, 'o', color='C0', label='$\mathbb{E}(\mathsf{x})$') plt.xlabel("index of element in $\mathsf{x}$") plt.ylabel("value of element in $\mathsf{x}$") plt.legend() plt.show()
_____no_output_____
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
Here are the credible intervals printed out:
x_true = np.linalg.solve(A, b)[:, 0] abs_err = abs(x.mean - x_true) rel_err = abs_err / abs(x.mean + x_true) print(f"Maximal absolute and relative error to mean estimate: {max(abs_err):.2e}, {max(rel_err):.2e}") print(f"68% marginal credible intervals of the entries of x") for i in range(25): print(f"element {i : >2}: {x.mean[i]: 0.2f} pm {1 * x.std[i]:.2f}")
Maximal absolute and relative error to mean estimate: 4.84e-03, 9.16e-02 68% marginal credible intervals of the entries of x element 0: 0.43 pm 1.14 element 1: 0.16 pm 0.91 element 2: -0.96 pm 0.87 element 3: 0.33 pm 0.85 element 4: -0.01 pm 0.86 element 5: -0.78 pm 0.88 element 6: 0.72 pm 1.14 element 7: 0.13 pm 1.11 element 8: 0.23 pm 0.95 element 9: -0.16 pm 1.15 element 10: 0.55 pm 1.04 element 11: -0.59 pm 1.09 element 12: 0.31 pm 1.21 element 13: 0.04 pm 1.11 element 14: 0.78 pm 1.18 element 15: 0.28 pm 1.16 element 16: -1.30 pm 1.21 element 17: -0.30 pm 1.08 element 18: 0.13 pm 0.97 element 19: -0.22 pm 1.11 element 20: 0.95 pm 0.77 element 21: -1.15 pm 1.11 element 22: 0.42 pm 1.19 element 23: 0.09 pm 0.98 element 24: -1.22 pm 1.01
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
Generally, the uncertainty is a conservative estimate of the error, especially for small $n$, hence the credible intervals above as well as in the plot are quite large. For large $n$, where uncertainty quantification matters more, the error bars are expected to fit the true uncertainty better. System Matrix $\mathsf{A}$ and its Inverse $\mathsf{H}$For completeness, we also illustrate the random variables $\mathsf{A}$ and $\mathsf{H}$. These quantities are not needed to describe the solution $\mathbf{x}$ and can usually be disregarded. However, in certain cases it may be of interest to acquire an approximate representation of $A$ or $H$, especially when it is infeasible to work with $A$ or $H$ directly due to their dimensionality. Indeed, it might sound confusing at first that the solver even constructs a belief about the matrix $A$ that initially defined the linear system, but keep in mind that linear solvers generally neither have nor require access to $A$ itself, only to a function handle of the matrix-vector product $\mathcal{A}(\mathbf{s}):=A\mathbf{s}$, $\mathcal{A}: \mathbb{R}^n \rightarrow \mathbb{R}^n$. The matrix might for all practical purposes be unknown (in large linear systems it is often impossible to even construct it in memory).Both return objects `Ahat` and `Ainv` are matrix-valued normal distributions describing the random variables $\mathsf{A}$ and $\mathsf{H}$. We plot the mean $\mathbb{E}(\mathsf{A})$ of $\mathsf{A}$, two samples $\mathsf{A}_1$ and $\mathsf{A}_2$ as well as the ground truth $A$; analogously for $\mathsf{H}$ below. The mean $\mathbb{E}(\mathsf{A})$ can be used as an estimate for $A$, and likewise $\mathbb{E}(\mathsf{H})$ for $H$.
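To make the function-handle idea concrete, here is a minimal sketch using SciPy's generic `LinearOperator`, purely as an illustration of matrix-free matrix-vector products; whether `problinsolve` accepts such operators directly is an assumption to verify against the ProbNum documentation.
# Matrix-free illustration: all a solver needs is a routine computing A @ s.
from scipy.sparse.linalg import LinearOperator

n = A.shape[0]

def matvec(s):
    # Here we still use the dense A, but the caller never needs to see it;
    # in a truly matrix-free setting this routine would be all we have.
    return A @ np.asarray(s).ravel()

A_handle = LinearOperator((n, n), matvec=matvec)
v = np.ones(n)
print(np.allclose(A_handle.matvec(v), A @ v))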
Ahat Ainv # Draw samples rng = np.random.default_rng(seed=42) Ahat_samples = Ahat.sample(rng=rng, size=3) Ainv_samples = Ainv.sample(rng=rng, size=3) vmax = np.max(np.hstack([A, b])) vmin = np.min(np.hstack([A, b])) # normalize diverging colobar, such that it is centered at zero norm = TwoSlopeNorm(vmin=vmin, vcenter=0, vmax=vmax) # Plot A rvdict = { "$A$": A, "$\mathbb{E}(\mathsf{A})$": Ahat.mean.todense(), "$\mathsf{A}_1$": Ahat_samples[0], "$\mathsf{A}_2$": Ahat_samples[1], } fig, axes = plt.subplots(nrows=1, ncols=len(rvdict), figsize=(10, 3), sharey=True) for i, (title, rv) in enumerate(rvdict.items()): axes[i].imshow(rv, cmap="bwr", norm=norm) axes[i].set_axis_off() axes[i].title.set_text(title) plt.tight_layout() H = np.linalg.inv(A) vmax = np.max(H) vmin = np.min(H) norm = TwoSlopeNorm(vmin=vmin, vcenter=0, vmax=vmax) # Plot H rvdict = { "$H$": H, "$\mathbb{E}(\mathsf{H})$": Ainv.mean.todense(), "$\mathsf{H}_1$": Ainv_samples[0], "$\mathsf{H}_2$": Ainv_samples[1], } fig, axes = plt.subplots(nrows=1, ncols=len(rvdict), figsize=(10, 3), sharey=True) for i, (title, rv) in enumerate(rvdict.items()): axes[i].imshow(rv, cmap="bwr", norm=norm) axes[i].set_axis_off() axes[i].title.set_text(title) plt.tight_layout()
_____no_output_____
MIT
docs/source/tutorials/linalg/linear_solvers_quickstart.ipynb
tskarvone/probnum
ASSIGNMENT 3Using TensorFlow to build a CNN for the CIFAR-10 dataset. Each record is of size 1*3072 (a flattened 32x32 RGB image). We build a CNN to classify the data into the 10 classes. DatasetCIFAR-10 dataset The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.http://www.cs.utoronto.ca/~kriz/cifar.html Installing pydrive
!pip install pydrive
Collecting pydrive
Downloading https://files.pythonhosted.org/packages/52/e0/0e64788e5dd58ce2d6934549676243dc69d982f198524be9b99e9c2a4fd5/PyDrive-1.3.1.tar.gz (987kB)
Requirement already satisfied: google-api-python-client>=1.2 in /usr/local/lib/python3.6/dist-packages (from pydrive) (1.6.7)
Requirement already satisfied: oauth2client>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pydrive) (4.1.3)
Requirement already satisfied: PyYAML>=3.0 in /usr/local/lib/python3.6/dist-packages (from pydrive) (3.13)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.2->pydrive) (3.0.0)
Requirement already satisfied: six<2dev,>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.2->pydrive) (1.11.0)
Requirement already satisfied: httplib2<1dev,>=0.9.2 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client>=1.2->pydrive) (0.11.3)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (0.2.4)
Requirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (0.4.5)
Requirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client>=4.0.0->pydrive) (4.0)
Building wheels for collected packages: pydrive
Building wheel for pydrive (setup.py) ... done
Successfully built pydrive
Installing collected packages: pydrive
Successfully installed pydrive-1.3.1
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Importing PyDrive and the Google authentication libraries
from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth import tensorflow as tf from oauth2client.client import GoogleCredentials
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Authenticating and creating the PyDrive client
auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth)
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Getting the IDs of all files in the Drive folder
file_list = drive.ListFile({'q': "'1DCFFw2O6BFq8Gk0eYu7JT4Qn224BNoCt' in parents and trashed=false"}).GetList() for file1 in file_list: print('title: %s, id: %s' % (file1['title'], file1['id']))
title: data_batch_1, id: 11Bo2ULl9_aOQ761ONc2vhepnydriELiT title: data_batch_2, id: 1asFrGiOMdHKY-_KO94e1fLWMBN_Ke92I title: test_batch, id: 1Wyz_RdmoLe9r9t1rloap8AttSltmfwrp title: data_batch_3, id: 11ky6i6FSTGWJYOzXquELD4H-GUr49C4f title: data_batch_5, id: 1rmRytfjJWua0cv17DzST6PqoDFY2APa6 title: data_batch_4, id: 1bb6TRjqNY5A0FsD_P7s3ssepMGWNW-Eh
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Importing libraries
import tensorflow as tf import numpy as np import pandas as pd import matplotlib.pyplot as plt from IPython import display from sklearn.model_selection import train_test_split import pickle %matplotlib inline
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Loading the data
def unpickle(file): with open(file, 'rb') as fo: dict = pickle.load(fo, encoding='bytes') return dict
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Downloading the batch files from Drive and unpickling them
zip_file = drive.CreateFile({'id': '11Bo2ULl9_aOQ761ONc2vhepnydriELiT'}) zip_file.GetContentFile('data_batch_1') zip_file = drive.CreateFile({'id': '1asFrGiOMdHKY-_KO94e1fLWMBN_Ke92I'}) zip_file.GetContentFile('data_batch_2') zip_file = drive.CreateFile({'id': '11ky6i6FSTGWJYOzXquELD4H-GUr49C4f'}) zip_file.GetContentFile('data_batch_3') zip_file = drive.CreateFile({'id': '1bb6TRjqNY5A0FsD_P7s3ssepMGWNW-Eh'}) zip_file.GetContentFile('data_batch_4') zip_file = drive.CreateFile({'id': '1rmRytfjJWua0cv17DzST6PqoDFY2APa6'}) zip_file.GetContentFile('data_batch_5') zip_file = drive.CreateFile({'id': '1Wyz_RdmoLe9r9t1rloap8AttSltmfwrp'}) zip_file.GetContentFile('test_batch') data1 = unpickle("data_batch_1") data2 = unpickle("data_batch_2") data3 = unpickle("data_batch_3") data4 = unpickle("data_batch_4") data5 = unpickle("data_batch_5") #label_data = unpickle('../input/batches.meta')[b'label_names'] labels1 = data1[b'labels'] data1 = data1[b'data'] * 1.0 labels2 = data2[b'labels'] data2 = data2[b'data'] * 1.0 labels3 = data3[b'labels'] data3 = data3[b'data'] * 1.0 labels4 = data4[b'labels'] data4 = data4[b'data'] * 1.0 labels5 = data5[b'labels'] data5 = data5[b'data'] * 1.0
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Combine the five training batches into a single training set
X_tr = np.concatenate([data1, data2, data3, data4, data5], axis=0) X_tr = np.dstack((X_tr[:, :1024], X_tr[:, 1024:2048], X_tr[:, 2048:])) / 1.0 X_tr = (X_tr - 128) / 255.0 X_tr = X_tr.reshape(-1, 32, 32, 3) y_tr = np.concatenate([labels1, labels2, labels3, labels4, labels5], axis=0)
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
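As a quick sanity check on the reshaping above (an optional sketch): each raw CIFAR-10 record stores the 1024 red, 1024 green and 1024 blue pixel values back to back, so a single record can also be recovered with a channel-major reshape; the result should match the corresponding row of `X_tr` after the same normalization.
# Optional sanity check: reconstruct the first image of batch 1 from its flat
# 3072-length record and compare it with the corresponding row of X_tr.
img = data1[0].reshape(3, 32, 32).transpose(1, 2, 0)  # channel-major -> (32, 32, 3)
img = (img - 128) / 255.0                             # same normalization as X_tr
print("matches X_tr[0]:", np.allclose(img, X_tr[0]))
plt.imshow(np.clip(img + 0.5, 0, 1))                  # shift back into [0, 1] for display
plt.axis('off')
plt.show()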
Setting the number of classes
num_classes = len(np.unique(y_tr)) print("X_tr", X_tr.shape) print("y_tr", y_tr.shape)
X_tr (50000, 32, 32, 3) y_tr (50000,)
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Importing the test data
test_data = unpickle("test_batch") X_test = test_data[b'data'] X_test = np.dstack((X_test[:, :1024], X_test[:, 1024:2048], X_test[:, 2048:])) / 1.0 X_test = (X_test - 128) / 255.0 X_test = X_test.reshape(-1, 32, 32, 3) y_test = np.asarray(test_data[b'labels'])
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Splitting into test and validation
X_te, X_cv, y_te, y_cv = train_test_split(X_test, y_test, test_size=0.5, random_state=1) print("X_te", X_te.shape) print("X_cv", X_cv.shape) print("y_te", y_te.shape) print("y_cv", y_cv.shape)
X_te (5000, 32, 32, 3) X_cv (5000, 32, 32, 3) y_te (5000,) y_cv (5000,)
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Batch generator
def get_batches(X, y, batch_size, crop=False, distort=True): # Shuffle X,y shuffled_idx = np.arange(len(y)) np.random.shuffle(shuffled_idx) i, h, w, c = X.shape # Enumerate indexes by steps of batch_size for i in range(0, len(y), batch_size): batch_idx = shuffled_idx[i:i+batch_size] X_return = X[batch_idx] # optional random crop of images if crop: woff = (w - 24) // 4 hoff = (h - 24) // 4 startw = np.random.randint(low=woff,high=woff*2) starth = np.random.randint(low=hoff,high=hoff*2) X_return = X_return[:,startw:startw+24,starth:starth+24,:] # do random flipping of images coin = np.random.binomial(1, 0.5, size=None) if coin and distort: X_return = X_return[...,::-1,:] yield X_return, y[batch_idx]
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Configurations
epochs = 20 # how many epochs batch_size = 128 steps_per_epoch = X_tr.shape[0] / batch_size
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
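Before building the network, a quick optional check that the batch generator defined earlier yields arrays of the expected shapes with this configuration:
# Optional sanity check of the batch generator: grab one batch and inspect shapes.
# With crop=False the images keep their full 32x32 size; with crop=True they
# would be randomly cropped to 24x24.
X_batch, y_batch = next(get_batches(X_tr, y_tr, batch_size, crop=False, distort=True))
print("X_batch:", X_batch.shape)  # expected (128, 32, 32, 3)
print("y_batch:", y_batch.shape)  # expected (128,)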
Building the network MODEL 7.13.4.6.7f Model description:* 7.6 - changed kernel reg rate to 0.01 from 0.1* 7.7 - optimize loss instead of ce* 7.8 - remove redundant lambda, replaced scale in regularizer with lambda, changed lambda from 0.01 to 0.001* 7.9 - lambda 0 instead of 3* 7.9.1 - lambda 1 instead of 0* 7.9.2 - use lambda 2 instead of 1* 7.9.4f - use 3x3 pooling instead of 2x2* 7.11.6f - add batch norm after conv 5* 7.11.2f - raise lambda, add dropout after fc2* 7.12.2f - change fully connected dropout to 20%* 7.12.2.2g - change fc dropout to 25%, increase filters in last 2 conv layers to 192 from 128* 7.13.2.2f - change all pool sizes to 2x2 from 3x3* 7.13.3.6f - use different lambda for conv + fc layers
# Create new graph graph = tf.Graph() # whether to retrain model from scratch or use saved model init = True model_name = "model_7.13.4.7.7l" with graph.as_default(): # Placeholders X = tf.placeholder(dtype=tf.float32, shape=[None, 32, 32, 3]) y = tf.placeholder(dtype=tf.int32, shape=[None]) training = tf.placeholder(dtype=tf.bool) # create global step for decaying learning rate global_step = tf.Variable(0, trainable=False) # lambda 6 lamC = 0.000050 lamF = 0.0025000 # learning rate j epochs_per_decay = 10 starting_rate = 0.003 decay_factor = 0.9 staircase = True learning_rate = tf.train.exponential_decay(starting_rate, # start at 0.003 global_step, steps_per_epoch * epochs_per_decay, # 100 epochs decay_factor, # 0.5 decrease staircase=staircase) # Small epsilon value for the BN transform epsilon = 1e-3 with tf.name_scope('conv1') as scope: # Convolutional layer 1 conv1 = tf.layers.conv2d( X, # Input data filters=64, # 64 filters kernel_size=(5, 5), # Kernel size: 5x5 strides=(1, 1), # Stride: 1 padding='SAME', # "same" padding activation=None, # None kernel_initializer=tf.truncated_normal_initializer(stddev=5e-2, seed=10), kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=lamC), name='conv1' ) # try batch normalization bn1 = tf.layers.batch_normalization( conv1, axis=-1, momentum=0.99, epsilon=epsilon, center=True, scale=True, beta_initializer=tf.zeros_initializer(), gamma_initializer=tf.ones_initializer(), moving_mean_initializer=tf.zeros_initializer(), moving_variance_initializer=tf.ones_initializer(), training=training, name='bn1' ) #apply relu conv1_bn_relu = tf.nn.relu(bn1, name='relu1') conv1_bn_relu = tf.layers.dropout(conv1_bn_relu, rate=0.1, seed=9, training=training) with tf.name_scope('conv2') as scope: # Convolutional layer 2 conv2 = tf.layers.conv2d( conv1_bn_relu, # Input data filters=64, # 64 filters kernel_size=(5, 5), # Kernel size: 5x5 strides=(1, 1), # Stride: 1 padding='SAME', # "same" padding activation=None, # None kernel_initializer=tf.truncated_normal_initializer(stddev=5e-2, seed=8), kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=lamC), name='conv2' # Add name ) # try batch normalization bn2 = tf.layers.batch_normalization( conv2, axis=-1, momentum=0.9, epsilon=epsilon, center=True, scale=True, beta_initializer=tf.zeros_initializer(), gamma_initializer=tf.ones_initializer(), moving_mean_initializer=tf.zeros_initializer(), moving_variance_initializer=tf.ones_initializer(), training=training, name='bn2' ) #apply relu conv2_bn_relu = tf.nn.relu(bn2, name='relu2') with tf.name_scope('pool1') as scope: # Max pooling layer 1 pool1 = tf.layers.max_pooling2d( conv2_bn_relu, # Input pool_size=(2, 2), # Pool size: 3x3 strides=(2, 2), # Stride: 2 padding='SAME', # "same" padding name='pool1' ) # dropout at 10% pool1 = tf.layers.dropout(pool1, rate=0.1, seed=1, training=training) with tf.name_scope('conv3') as scope: # Convolutional layer 3 conv3= tf.layers.conv2d( pool1, # Input filters=96, # 96 filters kernel_size=(4, 4), # Kernel size: 4x4 strides=(1, 1), # Stride: 1 padding='SAME', # "same" padding activation=None, # None kernel_initializer=tf.truncated_normal_initializer(stddev=5e-2, seed=7), kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=lamC), name='conv3' ) bn3 = tf.layers.batch_normalization( conv3, axis=-1, momentum=0.9, epsilon=epsilon, center=True, scale=True, beta_initializer=tf.zeros_initializer(), gamma_initializer=tf.ones_initializer(), moving_mean_initializer=tf.zeros_initializer(), 
moving_variance_initializer=tf.ones_initializer(), training=training, name='bn3' ) #apply relu conv3_bn_relu = tf.nn.relu(bn3, name='relu3') # dropout at 10% conv3_bn_relu = tf.layers.dropout(conv3_bn_relu, rate=0.1, seed=0, training=training) with tf.name_scope('conv4') as scope: # Convolutional layer 4 conv4= tf.layers.conv2d( conv3_bn_relu, # Input filters=96, # 96 filters kernel_size=(4, 4), # Kernel size: 4x4 strides=(1, 1), # Stride: 1 padding='SAME', # "same" padding activation=None, kernel_initializer=tf.truncated_normal_initializer(stddev=5e-2, seed=1), kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=lamC), name='conv4' ) bn4 = tf.layers.batch_normalization( conv4, axis=-1, momentum=0.9, epsilon=epsilon, center=True, scale=True, beta_initializer=tf.zeros_initializer(), gamma_initializer=tf.ones_initializer(), moving_mean_initializer=tf.zeros_initializer(), moving_variance_initializer=tf.ones_initializer(), training=training, name='bn4' ) #apply relu conv4_bn_relu = tf.nn.relu(bn4, name='relu4') # Max pooling layer 2 pool2 = tf.layers.max_pooling2d( conv4_bn_relu, # input pool_size=(2, 2), # pool size 2x2 strides=(2, 2), # stride 2 padding='SAME', name='pool2' ) with tf.name_scope('conv5') as scope: # Convolutional layer 5 conv5= tf.layers.conv2d( pool2, # Input filters=128, # 128 filters kernel_size=(3, 3), # Kernel size: 3x3 strides=(1, 1), # Stride: 1 padding='SAME', # "same" padding activation=None, kernel_initializer=tf.truncated_normal_initializer(stddev=5e-2, seed=2), kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=lamC), name='conv5' ) bn5 = tf.layers.batch_normalization( conv5, axis=-1, momentum=0.9, epsilon=epsilon, center=True, scale=True, beta_initializer=tf.zeros_initializer(), gamma_initializer=tf.ones_initializer(), moving_mean_initializer=tf.zeros_initializer(), moving_variance_initializer=tf.ones_initializer(), training=training, name='bn5' ) # activation conv5_bn_relu = tf.nn.relu(bn5, name='relu5') # try dropout here conv5_bn_relu = tf.layers.dropout(conv5_bn_relu, rate=0.1, seed=3, training=training) with tf.name_scope('conv6') as scope: # Convolutional layer 6 conv6= tf.layers.conv2d( conv5_bn_relu, # Input filters=128, # 128 filters kernel_size=(3, 3), # Kernel size: 3x3 strides=(1, 1), # Stride: 1 padding='SAME', # "same" padding activation=None, # None kernel_initializer=tf.truncated_normal_initializer(stddev=5e-2, seed=3), kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=lamC), name='conv6' ) bn6 = tf.layers.batch_normalization( conv6, axis=-1, momentum=0.9, epsilon=epsilon, center=True, scale=True, beta_initializer=tf.zeros_initializer(), gamma_initializer=tf.ones_initializer(), moving_mean_initializer=tf.zeros_initializer(), moving_variance_initializer=tf.ones_initializer(), training=training, name='bn6' ) #apply relu conv6_bn_relu = tf.nn.relu(bn6, name='relu6') # Max pooling layer 3 pool3 = tf.layers.max_pooling2d( conv6_bn_relu, # input pool_size=(2, 2), # pool size 2x2 strides=(2, 2), # stride 2 padding='SAME', name='pool3' ) with tf.name_scope('flatten') as scope: # Flatten output flat_output = tf.contrib.layers.flatten(pool3) # dropout at 10% flat_output = tf.layers.dropout(flat_output, rate=0.1, seed=5, training=training) # Fully connected layer 1 with tf.name_scope('fc1') as scope: fc1 = tf.layers.dense( flat_output, # input 1024, # 1024 hidden units activation=None, # None kernel_initializer=tf.variance_scaling_initializer(scale=2, seed=4), bias_initializer=tf.zeros_initializer(), 
kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=lamF), name="fc1" ) bn7 = tf.layers.batch_normalization( fc1, axis=-1, momentum=0.9, epsilon=epsilon, center=True, scale=True, beta_initializer=tf.zeros_initializer(), gamma_initializer=tf.ones_initializer(), moving_mean_initializer=tf.zeros_initializer(), moving_variance_initializer=tf.ones_initializer(), training=training, name='bn7' ) fc1_relu = tf.nn.relu(bn7, name='fc1_relu') # dropout at 25% fc1_do = tf.layers.dropout(fc1_relu, rate=0.25, seed=10, training=training) # Fully connected layer 2 with tf.name_scope('fc2') as scope: fc2 = tf.layers.dense( fc1_do, # input 512, # 512 hidden units activation=None, # None kernel_initializer=tf.variance_scaling_initializer(scale=2, seed=5), bias_initializer=tf.zeros_initializer(), kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=lamF), name="fc2" ) bn8 = tf.layers.batch_normalization( fc2, axis=-1, momentum=0.9, epsilon=epsilon, center=True, scale=True, beta_initializer=tf.zeros_initializer(), gamma_initializer=tf.ones_initializer(), moving_mean_initializer=tf.zeros_initializer(), moving_variance_initializer=tf.ones_initializer(), training=training, name='bn8' ) fc2_relu = tf.nn.relu(bn8, name='fc2_relu') # dropout at 10% fc2_do = tf.layers.dropout(fc2_relu, rate=0.25, seed=11, training=training) # Output layer logits = tf.layers.dense( fc2_do, # input num_classes, # One output unit per category activation=None, # No activation function kernel_initializer=tf.contrib.layers.xavier_initializer(uniform=True, seed=6,dtype=tf.dtypes.float32), ) # Kernel weights of the 1st conv. layer with tf.variable_scope('conv1', reuse=True): conv_kernels1 = tf.get_variable('kernel') # Mean cross-entropy mean_ce = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)) loss = mean_ce + tf.losses.get_regularization_loss() # Adam optimizer optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) # Minimize cross-entropy train_op = optimizer.minimize(loss, global_step=global_step) # Compute predictions and accuracy predictions = tf.argmax(logits, axis=1, output_type=tf.int32) is_correct = tf.equal(y, predictions) accuracy = tf.reduce_mean(tf.cast(is_correct, dtype=tf.float32)) # add this so that the batch norm gets run extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) # Create summary hooks tf.summary.scalar('accuracy', accuracy) tf.summary.scalar('cross_entropy', mean_ce) tf.summary.scalar('learning_rate', learning_rate) # Merge all the summaries and write them out to /tmp/mnist_logs (by default) merged = tf.summary.merge_all()
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
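The learning-rate schedule defined in the graph above uses staircase exponential decay: the rate drops by a factor of 0.9 once every `steps_per_epoch * epochs_per_decay` steps. A small sketch outside the graph reproduces the values that appear in the training log below:
# Illustration of the staircase schedule:
# lr(step) = starting_rate * decay_factor ** floor(step / decay_steps)
starting_rate, decay_factor, epochs_per_decay = 0.003, 0.9, 10
decay_steps = steps_per_epoch * epochs_per_decay  # 50000/128 * 10 = 3906.25
for step in [391, 3910, 7820]:  # last steps of epochs 0, 9 and 19 in the log below
    lr = starting_rate * decay_factor ** (step // decay_steps)
    print(f"step {step}: lr = {lr:.5f}")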
CONFIGURE OPTIONS
init = True # whether to initialize the model or use a saved version crop = False # do random cropping of images? meta_data_every = 5 log_to_tensorboard = False print_every = 1 # how often to print metrics checkpoint_every = 1 # how often to save model in epochs use_gpu = True # whether or not to use the GPU print_metrics = True # whether to print or plot metrics, if False a plot will be created and updated every epoch # Placeholders for metrics if init: valid_acc_values = [] valid_cost_values = [] train_acc_values = [] train_cost_values = [] train_lr_values = [] train_loss_values = [] config = tf.ConfigProto()
_____no_output_____
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Training the model
with tf.Session(graph=graph, config=config) as sess: if log_to_tensorboard: train_writer = tf.summary.FileWriter('./logs/tr_' + model_name, sess.graph) test_writer = tf.summary.FileWriter('./logs/te_' + model_name) if not print_metrics: # create a plot to be updated as model is trained f, ax = plt.subplots(1,3,figsize=(20,5)) # create the saver saver = tf.train.Saver() # If the model is new initialize variables, else restore the session if init: sess.run(tf.global_variables_initializer()) else: saver.restore(sess, './model/cifar_'+model_name+'.ckpt') # Set seed np.random.seed(0) print("Training", model_name, "...") # Train several epochs for epoch in range(epochs): # Accuracy values (train) after each batch batch_acc = [] batch_cost = [] batch_loss = [] batch_lr = [] # only log run metadata once per epoch write_meta_data = False for X_batch, y_batch in get_batches(X_tr, y_tr, batch_size, crop=crop, distort=True): if write_meta_data and log_to_tensboard: # create the metadata run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE) run_metadata = tf.RunMetadata() # Run training and evaluate accuracy _, _, summary, acc_value, cost_value, loss_value, step, lr = sess.run([train_op, extra_update_ops, merged, accuracy, mean_ce, loss, global_step, learning_rate], feed_dict={ X: X_batch, y: y_batch, training: True }, options=run_options, run_metadata=run_metadata) # Save accuracy (current batch) batch_acc.append(acc_value) batch_cost.append(cost_value) batch_lr.append(lr) batch_loss.append(loss_value) # write the summary train_writer.add_run_metadata(run_metadata, 'step %d' % step) train_writer.add_summary(summary, step) write_meta_data = False else: # Run training without meta data _, _, summary, acc_value, cost_value, loss_value, step, lr = sess.run([train_op, extra_update_ops, merged, accuracy, mean_ce, loss, global_step, learning_rate], feed_dict={ X: X_batch, y: y_batch, training: True }) # Save accuracy (current batch) batch_acc.append(acc_value) batch_cost.append(cost_value) batch_lr.append(lr) batch_loss.append(loss_value) # write the summary if log_to_tensorboard: train_writer.add_summary(summary, step) # save checkpoint every nth epoch if(epoch % checkpoint_every == 0): print("Saving checkpoint") # save the model save_path = saver.save(sess, './model/cifar_'+model_name+'.ckpt') # Now that model is saved set init to false so we reload it init = False # init batch arrays batch_cv_acc = [] batch_cv_cost = [] batch_cv_loss = [] # Evaluate validation accuracy with batches so as to not crash the GPU for X_batch, y_batch in get_batches(X_cv, y_cv, batch_size, crop=crop, distort=False): summary, valid_acc, valid_cost, valid_loss = sess.run([merged, accuracy, mean_ce, loss], feed_dict={ X: X_batch, y: y_batch, training: False }) batch_cv_acc.append(valid_acc) batch_cv_cost.append(valid_cost) batch_cv_loss.append(valid_loss) # Write average of validation data to summary logs if log_to_tensorboard: summary = tf.Summary(value=[tf.Summary.Value(tag="accuracy", simple_value=np.mean(batch_cv_acc)),tf.Summary.Value(tag="cross_entropy", simple_value=np.mean(batch_cv_cost)),]) test_writer.add_summary(summary, step) step += 1 # take the mean of the values to add to the metrics valid_acc_values.append(np.mean(batch_cv_acc)) valid_cost_values.append(np.mean(batch_cv_cost)) train_acc_values.append(np.mean(batch_acc)) train_cost_values.append(np.mean(batch_cost)) train_lr_values.append(np.mean(batch_lr)) train_loss_values.append(np.mean(batch_loss)) if print_metrics: # Print progress every nth epoch 
to keep output to reasonable amount if(epoch % print_every == 0): print('Epoch {:02d} - step {} - cv acc: {:.3f} - train acc: {:.3f} (mean) - cv cost: {:.3f} - lr: {:.5f}'.format( epoch, step, np.mean(batch_cv_acc), np.mean(batch_acc), np.mean(batch_cv_cost), lr )) else: # update the plot ax[0].cla() ax[0].plot(valid_acc_values, color="red", label="Validation") ax[0].plot(train_acc_values, color="blue", label="Training") ax[0].set_title('Validation accuracy: {:.4f} (mean last 3)'.format(np.mean(valid_acc_values[-3:]))) # since we can't zoom in on plots like in tensorboard, scale y axis to give a decent amount of detail if np.mean(valid_acc_values[-3:]) > 0.85: ax[0].set_ylim([0.75,1.0]) elif np.mean(valid_acc_values[-3:]) > 0.75: ax[0].set_ylim([0.65,1.0]) elif np.mean(valid_acc_values[-3:]) > 0.65: ax[0].set_ylim([0.55,1.0]) elif np.mean(valid_acc_values[-3:]) > 0.55: ax[0].set_ylim([0.45,1.0]) ax[0].set_xlabel('Epoch') ax[0].set_ylabel('Accuracy') ax[0].legend() ax[1].cla() ax[1].plot(valid_cost_values, color="red", label="Validation") ax[1].plot(train_cost_values, color="blue", label="Training") ax[1].set_title('Validation xentropy: {:.3f} (mean last 3)'.format(np.mean(valid_cost_values[-3:]))) ax[1].set_xlabel('Epoch') ax[1].set_ylabel('Cross Entropy') ax[1].legend() ax[2].cla() ax[2].plot(train_lr_values) ax[2].set_title("Learning rate: {:.6f}".format(np.mean(train_lr_values[-1:]))) ax[2].set_xlabel("Epoch") ax[2].set_ylabel("Learning Rate") display.display(plt.gcf()) display.clear_output(wait=True) # Print data every 50th epoch so I can write it down to compare models if (not print_metrics) and (epoch % 50 == 0) and (epoch > 1): if(epoch % print_every == 0): print('Epoch {:02d} - step {} - cv acc: {:.3f} - train acc: {:.3f} (mean) - cv cost: {:.3f} - lr: {:.5f}'.format( epoch, step, np.mean(batch_cv_acc), np.mean(batch_acc), np.mean(batch_cv_cost), lr )) # print results of last epoch print('Epoch {} - cv acc: {:.4f} - train acc: {:.4f} (mean) - cv cost: {:.3f}'.format( epochs, np.mean(batch_cv_acc), np.mean(batch_acc), np.mean(batch_cv_cost) )) # save the session save_path = saver.save(sess, './model/cifar_'+model_name+'.ckpt') # init the test data array test_acc_values = [] # Check on the test data for X_batch, y_batch in get_batches(X_te, y_te, batch_size, crop=crop, distort=False): test_accuracy = sess.run(accuracy, feed_dict={ X: X_batch, y: y_batch, training: False }) test_acc_values.append(test_accuracy) # average test accuracy across batches test_acc = np.mean(test_acc_values) # show the plot plt.show() # print results of last epoch print('Epoch {} - cv acc: {:.4f} - train acc: {:.4f} (mean) - cv cost: {:.3f}'.format( epochs, np.mean(batch_cv_acc), np.mean(batch_acc), np.mean(batch_cv_cost) ))
Training model_7.13.4.7.7l ... Saving checkpoint Epoch 00 - step 391 - cv acc: 0.514 - train acc: 0.480 (mean) - cv cost: 1.403 - lr: 0.00300 Saving checkpoint Epoch 01 - step 782 - cv acc: 0.680 - train acc: 0.648 (mean) - cv cost: 0.928 - lr: 0.00300 Saving checkpoint Epoch 02 - step 1173 - cv acc: 0.710 - train acc: 0.710 (mean) - cv cost: 0.902 - lr: 0.00300 Saving checkpoint Epoch 03 - step 1564 - cv acc: 0.747 - train acc: 0.743 (mean) - cv cost: 0.733 - lr: 0.00300 Saving checkpoint Epoch 04 - step 1955 - cv acc: 0.753 - train acc: 0.764 (mean) - cv cost: 0.740 - lr: 0.00300 Saving checkpoint Epoch 05 - step 2346 - cv acc: 0.738 - train acc: 0.784 (mean) - cv cost: 0.802 - lr: 0.00300 Saving checkpoint Epoch 06 - step 2737 - cv acc: 0.763 - train acc: 0.797 (mean) - cv cost: 0.731 - lr: 0.00300 Saving checkpoint Epoch 07 - step 3128 - cv acc: 0.774 - train acc: 0.812 (mean) - cv cost: 0.689 - lr: 0.00300 Saving checkpoint Epoch 08 - step 3519 - cv acc: 0.789 - train acc: 0.819 (mean) - cv cost: 0.624 - lr: 0.00300 Saving checkpoint Epoch 09 - step 3910 - cv acc: 0.819 - train acc: 0.827 (mean) - cv cost: 0.553 - lr: 0.00270 Saving checkpoint Epoch 10 - step 4301 - cv acc: 0.818 - train acc: 0.841 (mean) - cv cost: 0.564 - lr: 0.00270 Saving checkpoint Epoch 11 - step 4692 - cv acc: 0.833 - train acc: 0.848 (mean) - cv cost: 0.510 - lr: 0.00270 Saving checkpoint Epoch 12 - step 5083 - cv acc: 0.838 - train acc: 0.854 (mean) - cv cost: 0.503 - lr: 0.00270 Saving checkpoint Epoch 13 - step 5474 - cv acc: 0.824 - train acc: 0.858 (mean) - cv cost: 0.555 - lr: 0.00270 Saving checkpoint Epoch 14 - step 5865 - cv acc: 0.820 - train acc: 0.861 (mean) - cv cost: 0.532 - lr: 0.00270 Saving checkpoint Epoch 15 - step 6256 - cv acc: 0.850 - train acc: 0.866 (mean) - cv cost: 0.455 - lr: 0.00270 Saving checkpoint Epoch 16 - step 6647 - cv acc: 0.848 - train acc: 0.870 (mean) - cv cost: 0.476 - lr: 0.00270 Saving checkpoint Epoch 17 - step 7038 - cv acc: 0.823 - train acc: 0.871 (mean) - cv cost: 0.551 - lr: 0.00270 Saving checkpoint Epoch 18 - step 7429 - cv acc: 0.849 - train acc: 0.875 (mean) - cv cost: 0.458 - lr: 0.00270 Saving checkpoint Epoch 19 - step 7820 - cv acc: 0.851 - train acc: 0.877 (mean) - cv cost: 0.501 - lr: 0.00243 Epoch 20 - cv acc: 0.8510 - train acc: 0.8768 (mean) - cv cost: 0.501 Epoch 20 - cv acc: 0.8510 - train acc: 0.8768 (mean) - cv cost: 0.501
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
Scoring and evaluating the trained model
## MODEL 7.20.0.11g print("Model : ", model_name) print("Convolutional network accuracy (test set):",test_acc, " Validation Set", valid_acc_values[-1])
Model : model_7.13.4.7.7l Convolutional network accuracy (test set): 0.84121096 Validation Set 0.8509766
MIT
Assignment 3_Tensorflow/CIFAR_10Xavier_initializer.ipynb
RajeshreeKale/CSYE
The Dice Problem This notebook is part of [Bite Size Bayes](https://allendowney.github.io/BiteSizeBayes/), an introduction to probability and Bayesian statistics using Python.Copyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) The following cell downloads `utils.py`, which contains some utility function we'll need.
from os.path import basename, exists def download(url): filename = basename(url) if not exists(filename): from urllib.request import urlretrieve local, _ = urlretrieve(url, filename) print('Downloaded ' + local) download('https://github.com/AllenDowney/BiteSizeBayes/raw/master/utils.py')
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
If everything we need is installed, the following cell should run with no error messages.
import pandas as pd import numpy as np from utils import values
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Review[In the previous notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/03_cookie.ipynb) we started with Bayes's Theorem, written like this:$P(A|B) = P(A) ~ P(B|A) ~/~ P(B)$And applied it to the case where we use data, $D$, to update the probability of a hypothesis, $H$. In this context, we write Bayes's Theorem like this:$P(H|D) = P(H) ~ P(D|H) ~/~ P(D)$And give each term a name:* $P(H)$ is the "prior probability" of the hypothesis, which represents how confident you are that $H$ is true prior to seeing the data,* $P(D|H)$ is the "likelihood" of the data, which is the probability of seeing $D$ if the hypothesis is true,* $P(D)$ is the "total probability of the data", that is, the chance of seeing $D$ regardless of whether $H$ is true or not.* $P(H|D)$ is the "posterior probability" of the hypothesis, which indicates how confident you should be that $H$ is true after taking the data into account.We used Bayes's Theorem to solve a cookie-related problem, and I presented the Bayes table, a way to solve Bayesian problems more generally. I didn't really explain how it works, though. That's the goal of this notebook.I'll start by extending the table method to a problem with more than two hypotheses. More hypothesesOne nice thing about the table method is that it works with more than two hypotheses. As an example, let's do another version of the cookie problem.Suppose you have five bowls:* Bowl 0 contains no vanilla cookies.* Bowl 1 contains 25% vanilla cookies.* Bowl 2 contains 50% vanilla cookies.* Bowl 3 contains 75% vanilla cookies.* Bowl 4 contains 100% vanilla cookies.Now suppose we choose a bowl at random and then choose a cookie, and we get a vanilla cookie. What is the posterior probability that we chose each bowl?Here's a table that represents the five hypotheses and their prior probabilities:
import pandas as pd table = pd.DataFrame() table['prior'] = 1/5, 1/5, 1/5, 1/5, 1/5 table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
The likelihood of drawing a vanilla cookie from each bowl is the given proportion of vanilla cookies:
table['likelihood'] = 0, 0.25, 0.5, 0.75, 1 table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Once we have priors and likelihoods, the remaining steps are always the same. We compute the unnormalized posteriors:
table['unnorm'] = table['prior'] * table['likelihood'] table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
And the total probability of the data.
prob_data = table['unnorm'].sum() prob_data
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Then divide through to get the normalized posteriors.
table['posterior'] = table['unnorm'] / prob_data table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Two things you might notice about these results:1. One of the hypotheses has a posterior probability of 0, which means it has been ruled out entirely. And that makes sense: Bowl 0 contains no vanilla cookies, so if we get a vanilla cookie, we know it's not from Bowl 0.2. The posterior probabilities form a straight line. We can see this more clearly by plotting the results.
import matplotlib.pyplot as plt table['posterior'].plot(kind='bar') plt.xlabel('Bowl #') plt.ylabel('Posterior probability');
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
**Exercise:** Use the table method to solve the following problem and plot the results as a bar chart.>The blue M&M was introduced in 1995. Before then, the color mix in a bag of plain M&Ms was (30% Brown, 20% Yellow, 20% Red, 10% Green, 10% Orange, 10% Tan). >>Afterward it was (24% Blue , 20% Green, 16% Orange, 14% Yellow, 13% Red, 13% Brown).>>A friend of mine has two bags of M&Ms, and he tells me that one is from 1994 and one from 1996. He won't tell me which is which, but he gives me one M&M from each bag. One is yellow and one is green. What is the probability that the yellow M&M came from the 1994 bag?Hint: If the yellow came from 1994, the green must have come from 1996. By Theorem 2 (conjunction), the likelihood of this combination is (0.2)(0.2).
# Solution goes here # Solution goes here
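If you want to compare with one possible solution sketch (there are other valid ways to organize it): the hypotheses are "the yellow M&M came from the 1994 bag" and "it came from the 1996 bag", and each likelihood is the product of two draws, as in the hint.
# One possible solution sketch (compare against your own work).
# H1: yellow from 1994 (so the green came from 1996); H2: yellow from 1996.
table_mm = pd.DataFrame(index=['yellow from 1994', 'yellow from 1996'])
table_mm['prior'] = 1/2, 1/2
table_mm['likelihood'] = 0.20 * 0.20, 0.14 * 0.10  # P(yellow|bag) * P(green|other bag)
table_mm['unnorm'] = table_mm['prior'] * table_mm['likelihood']
table_mm['posterior'] = table_mm['unnorm'] / table_mm['unnorm'].sum()
table_mm['posterior'].plot(kind='bar')
plt.ylabel('Posterior probability')
plt.show()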
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Why does this work?Now I will explain how the table method works, making two arguments:1. First, I'll show that it makes sense to normalize the posteriors so they add up to 1.2. Then I'll show that this step is consistent with Bayes's Theorem, because the total of the unnormalized posteriors is the total probability of the data, $P(D)$.Here's the first argument. Let's start with Bayes's Theorem:$P(H|D) = P(H) ~ P(D|H)~/~P(D)$Notice that the denominator, $P(D)$, does not depend on $H$, so it is the same for all hypotheses. If we factor it out, we get:$P(H|D) \sim P(H) ~ P(D|H)$which says that the posterior probabilities *are proportional to* the unnormalized posteriors. In other words, if we leave out $P(D)$, we get the proportions right, but not the total. Then how do we figure out the total? Well, in this example we know that the cookie came from exactly one of the bowls. So the hypotheses are:* Mutually exclusive, that is, only one of them can be true, and* Collectively exhaustive, that is, at least one of them must be true.Exactly one of the hypotheses must be true, so the posterior probabilities have to add up to 1. Most of the time, the unnormalized posteriors don't add up to 1, but when we divide through by the total, we ensure that the *normalized* posteriors do.That's the first argument. I hope it makes some sense, but if you don't find it entirely satisfying, keep going. Rolling the diceBefore I can make the second argument, we need one more law of probability, which I will explain with a new example:> Suppose you have a 4-sided die and a 6-sided die. You choose one at random and roll it. What is the probability of getting a 1?To answer that, I'll define two hypotheses and a datum:* $H_4$: You chose the 4-sided die.* $H_6$: You chose the 6-sided die.* $D$: You rolled a 1. On a 4-sided die, the probability of rolling 1 is $1/4$; on a 6-sided die it is $1/6$. So we can write the conditional probabilities:$P(D|H_4) = 1/4$$P(D|H_6) = 1/6$And if the probability of choosing either die is equal, we know the prior probabilities:$P(H_4) = 1/2$$P(H_6) = 1/2$ But what is the total probability of the data, $P(D)$?At this point your intuition might tell you that it is the weighted sum of the conditional probabilities:$P(D) = P(H_4)P(D|H_4) + P(H_6)P(D|H_6)$Which is$P(D) = (1/2)(1/4) + (1/2)(1/6)$Which is
(1/2)*(1/4) + (1/2)*(1/6)
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
And that's correct. But if your intuition did not tell you that, or if you would like to see something closer to a proof, keep going. DisjunctionIn this example, we can describe the outcome in terms of logical operators like this:> The outcome is 1 if you choose the 4-sided die **and** roll 1 **or** you roll the 6-sided die **and** roll 1.Using math notation, $D$ is true if:$(H_4 ~and~ D) ~or~ (H_6 ~and~ D)$We've already seen the $and$ operator, also known as "conjunction", but we have not yet seen the $or$ operator, which is also known as "disjunction"?For that, we a new rule, which I'll call **Theorem 4**:$P(A ~or~ B) = P(A) + P(B) - P(A ~and~ B)$ To see why that's true, let's take a look at the Venn diagram:What we want is the total of the blue, red, and purple regions. If we add $P(A)$ and $P(B)$, we get the blue and red regions right, but we double-count the purple region. So we have to subtract off one purple region, which is $P(A ~and~ B)$. **Exercise:** Let's do a quick example using disjunction. A standard deck of playing cards contains 52 cards; * 26 of them are red, * 12 of them are face cards, and * 6 of them are red face cards.The following diagram shows what I mean: the red rectangle contains the red cards; the blue rectangle contains the face cards, and the overlap includes the red face cards.<img width="500" src="https://github.com/AllenDowney/BiteSizeBayes/raw/master/card_venn_diagram.png">If we choose a card at random, here are the probabilities of choosing a red card, a face card, and a red face card:
p_red = 26/52 p_face = 12/52 p_red_face = 6/52 p_red, p_face, p_red_face
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Use Theorem 4 to compute the probability of choosing a card that is either red, or a face card, or both:
# Solution goes here
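One possible solution, applying Theorem 4 directly to the probabilities defined above:
# Apply Theorem 4: P(red or face) = P(red) + P(face) - P(red and face)
p_red_or_face = p_red + p_face - p_red_face
p_red_or_face  # 26/52 + 12/52 - 6/52 = 32/52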
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Total probabilityIn the dice example, $H_4$ and $H_6$ are mutually exclusive, which means only one of them can be true, so the purple region is 0. Therefore:$P(D) = P(H_4 ~and~ D) + P(H_6 ~and~ D) - 0$Now we can use **Theorem 2** to replace the conjunctions with conditional probabilities:$P(D) = P(H_4)~P(D|H_4) + P(H_6)~P(D|H_6)$By a similar argument, we can show that this is true for any number of hypotheses. For example, if we add an 8-sided die to the mix, we can write:$P(D) = P(H_4)~P(D|H_4) + P(H_6)~P(D|H_6) + P(H_8)~P(D|H_8)$ And more generally, with any number of hypotheses $H_i$:$P(D) = \sum_i P(H_i)~P(D|H_i)$Which shows that the total probability of the data is the sum of the unnormalized posteriors.And that's why the table method works. Now let's get back to the original question:> Suppose you have a 4-sided die and a 6-sided die. You choose one at random and roll it. What is the probability of getting a 1?We can use a Bayes table to compute the answer. Here are the priors:
table = pd.DataFrame(index=['H4', 'H6']) table['prior'] = 1/2, 1/2 table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
And the likelihoods:
table['likelihood'] = 1/4, 1/6 table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Now we compute the unnormalized posteriors in the usual way:
table['unnorm'] = table['prior'] * table['likelihood'] table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
And the total probability of the data:
prob_data = table['unnorm'].sum() prob_data
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
That's what we got when we solved the problem by hand, so that's good. **Exercise:** Suppose you have a 4-sided, 6-sided, and 8-sided die. You choose one at random and roll it. What is the probability of getting a 1?Do you expect it to be higher or lower than in the previous example?
# Solution goes here # Solution goes here
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Prediction and inferenceIn the previous section, we used a Bayes table to solve this problem:> Suppose you have a 4-sided die and a 6-sided die. You choose one at random and roll it. What is the probability of getting a 1?I'll call this a "prediction problem" because we are given a scenario and asked for the probability of a predicted outcome.Now let's solve a closely-related problem:> Suppose you have a 4-sided die and a 6-sided die. You choose one at random, roll it, and get a 1. What is the probability that the die you rolled is 4-sided?I'll call this an "inference problem" because we are given the outcome and asked to figure out, or "infer", which die was rolled.Here's a solution:
table = pd.DataFrame(index=['H4', 'H6']) table['prior'] = 1/2, 1/2 table['likelihood'] = 1/4, 1/6 table['unnorm'] = table['prior'] * table['likelihood'] prob_data = table['unnorm'].sum() table['posterior'] = table['unnorm'] / prob_data table
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Given that the outcome is a 1, there is a 60% chance the die you rolled was 4-sided.As this example shows, prediction and inference are closely related problems, and we can use the same methods for both. **Exercise:** Let's add some more dice:1. Suppose you have a 4-sided, 6-sided, 8-sided, and 12-sided die. You choose one at random and roll it. What is the probability of getting a 1?2. Now suppose the outcome is a 1. What is the probability that the die you rolled is 4-sided? And what are the posterior probabilities for the other dice?
# Solution goes here # Solution goes here
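One possible solution sketch, extending the Bayes table to four dice; the total of the unnormalized posteriors answers the prediction question, and the posterior column answers the inference question:
# One possible solution sketch: a Bayes table with four dice.
table4 = pd.DataFrame(index=['H4', 'H6', 'H8', 'H12'])
table4['prior'] = 1/4, 1/4, 1/4, 1/4
table4['likelihood'] = 1/4, 1/6, 1/8, 1/12
table4['unnorm'] = table4['prior'] * table4['likelihood']
prob_data4 = table4['unnorm'].sum()                   # answer to question 1
table4['posterior'] = table4['unnorm'] / prob_data4   # answers question 2
print(prob_data4)
table4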
_____no_output_____
MIT
04_dice.ipynb
jonathonfletcher/BiteSizeBayes
Copyright 2020 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
tensorflow_privacy/privacy/membership_inference_attack/codelab.ipynb
LuluBeatson/privacy