## Visualizing

If you're in a Jupyter notebook, use `displacy.render`. Otherwise, use `displacy.serve` to start a web server and show the visualization in your browser.
```python
from IPython.display import display, SVG
from spacy import displacy
```
*Source: abhiWriteCode/Tutorial-for-spaCy, notebook.ipynb (MIT license)*
### Visualize dependencies
```python
doc = nlp("This is a sentence")
diagram = displacy.render(doc, style="dep")
display(SVG(diagram))
```
### Visualize named entities
```python
doc = nlp("Larry Page founded Google")
diagram = displacy.render(doc, style="ent")
display(SVG(diagram))
```
## Word vectors and similarity

To use word vectors, you need to install one of the larger models ending in `md` or `lg`, for example `en_core_web_lg` (installable with `python -m spacy download en_core_web_lg`).

### Comparing similarity
```python
doc1 = nlp("I like cats")
doc2 = nlp("I like dogs")

# Compare 2 documents
print(doc1.similarity(doc2))

# Compare 2 tokens
print(doc1[2].similarity(doc2[2]))

# Compare tokens and spans
print(doc1[0].similarity(doc2[1:3]))
```
```
0.9133257426978459
0.7518883
0.19759766442466106
```
### Accessing word vectors
```python
# Vector as a numpy array
doc = nlp("I like cats")
print(doc[2].vector.shape)

# The L2 norm of the token's vector
print(doc[2].vector_norm)
```
```
(384,)
24.809391
```
## Pipeline components

Functions that take a `Doc` object, modify it, and return it.

`Text` --> | `tokenizer`, `tagger`, `parser`, `ner`, ... | --> `Doc`

### Pipeline information
```python
import spacy  # needed if this cell runs before spaCy has been imported elsewhere

nlp = spacy.load("en_core_web_sm")
print(nlp.pipe_names)
print(nlp.pipeline)
```
```
['tagger', 'parser', 'ner']
[('tagger', <spacy.pipeline.Tagger object at 0x7f7972874ef0>), ('parser', <spacy.pipeline.DependencyParser object at 0x7f79728cb150>), ('ner', <spacy.pipeline.EntityRecognizer object at 0x7f797282c4c0>)]
```
### Custom components
```python
# Function that modifies the doc and returns it
def custom_component(doc):
    print("Do something to the doc here!")
    return doc

# Add the component first in the pipeline
nlp.add_pipe(custom_component, first=True)
```
Components can be added `first`, `last` (default), or `before` or `after` an existing component.

## Extension attributes

Custom attributes that are registered on the global `Doc`, `Token` and `Span` classes and become available as `._`.
```python
import os

# NOTE: os._exit() terminates the Python kernel immediately (forcing a restart);
# the imports below must be re-run in the fresh kernel afterwards.
os._exit(00)

from spacy.tokens import Doc, Token, Span
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The sky over New York is blue")
```
### Attribute extensions (*with default value*)
```python
# Register custom attribute on Token class
Token.set_extension("is_color", default=False)

# Overwrite extension attribute with default value
doc[6]._.is_color = True
```
### Property extensions (*with getter and setter*)
```python
# Register custom attribute on Doc class
get_reversed = lambda doc: doc.text[::-1]
Doc.set_extension("reversed", getter=get_reversed)

# Compute value of extension attribute with getter
doc._.reversed
```
### Method extensions (*callable method*)
```python
# Register custom attribute on Span class
has_label = lambda span, label: span.label_ == label
Span.set_extension("has_label", method=has_label)

# Compute value of extension attribute with method
doc[3:5]._.has_label("GPE")
```
## Rule-based matching

### Using the matcher
```python
from spacy.matcher import Matcher

# Matcher is initialized with the shared vocab
matcher = Matcher(nlp.vocab)

# Add with ID, optional callback and pattern(s);
# each dict represents one token and its attributes
pattern = [{"LOWER": "new"}, {"LOWER": "york"}]
matcher.add("CITIES", None, pattern)

# Match by calling the matcher on a Doc object
doc = nlp("I live in New York")
matches = matcher(doc)

# Matches are (match_id, start, end) tuples
for match_id, start, end in matches:
    # Get the matched span by slicing the Doc
    span = doc[start:end]
    print(span.text)
```
```
New York
```
### Token patterns
```python
# "love cats", "loving cats", "loved cats"
pattern1 = [{"LEMMA": "love"}, {"LOWER": "cats"}]

# "10 people", "twenty people"
pattern2 = [{"LIKE_NUM": True}, {"TEXT": "people"}]

# "book", "a cat", "the sea" (noun + optional article)
pattern3 = [{"POS": "DET", "OP": "?"}, {"POS": "NOUN"}]
```
### Operators and quantifiers

Can be added to a token dict as the `"OP"` key (a short pattern example using these quantifiers follows the glossary below):

* `!` Negate pattern and match **exactly 0 times**.
* `?` Make pattern optional and match **0 or 1 times**.
* `+` Require pattern to match **1 or more times**.
* `*` Allow pattern to match **0 or more times**.

## Glossary

| Term | Description |
|---|---|
| Tokenization | Segmenting text into words, punctuation etc. |
| Lemmatization | Assigning the base forms of words, for example: "was" → "be" or "rats" → "rat". |
| Sentence Boundary Detection | Finding and segmenting individual sentences. |
| Part-of-speech (POS) Tagging | Assigning word types to tokens, like verb or noun. |
| Dependency Parsing | Assigning syntactic dependency labels describing the relations between individual tokens, like subject or object. |
| Named Entity Recognition (NER) | Labeling named "real-world" objects, like persons, companies or locations. |
| Text Classification | Assigning categories or labels to a whole document, or parts of a document. |
| Statistical model | Process for making predictions based on examples. |
| Training | Updating a statistical model with new examples. |
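As referenced above, here is a minimal, hedged sketch of the `"OP"` quantifiers in action. It assumes the `nlp` pipeline and the v2-style `Matcher` API used earlier in this tutorial; the sentence is made up:

```python
from spacy.matcher import Matcher

matcher = Matcher(nlp.vocab)

# "OP": "?" makes the adjective optional, so both "cats" and "cute cats" can match
pattern = [{"POS": "ADJ", "OP": "?"}, {"LOWER": "cats"}]
matcher.add("CATS", None, pattern)

doc = nlp("I like cute cats and cats in general")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)
```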
## Make gif

In this example, we load in a single subject example, remove electrodes that exceed a kurtosis threshold (in place), load a model, and predict activity at all model locations. We then convert the reconstruction to a nifti and plot 3 consecutive timepoints, first with `plot_glass_brain`, and then create .png files and compile them as a gif.
```python
# Code source: Lucy Owen & Andrew Heusser
# License: MIT

import supereeg as se

# load example data
bo = se.load('example_data')

# load example model
model = se.load('example_model')

# the default will replace the electrode location with the nearest voxel
# and reconstruct at all other locations
reconstructed_bo = model.predict(bo)

# print out info on new brain object
reconstructed_bo.info()

# convert to nifti
reconstructed_nifti = reconstructed_bo.to_nii(template='gray', vox_size=20)

# make gif; the default time window is 0 to 10, but you can specify a range with `index`
# reconstructed_nifti.make_gif('/your/path/to/gif/', index=np.arange(100), name='sample_gif')
```
*Source: tmuntianu/supereeg, docs/auto_examples/make_gif.ipynb (MIT license)*

# "Loss Functions: Cross Entropy Loss and You!"

> "Meet multi-classification's favorite loss function"

- toc: true
- badges: true
- comments: true
- author: Wayde Gilliam
- image: images/articles/understanding-cross-entropy-loss-logo.png
```python
# only run this cell if you are in Colab
!pip install fastai

import torch
from torch.nn import functional as F
from fastai2.vision.all import *
```
*Source: XCS224U-Spring2021-TeamTextSumm/ohmeow_website, _notebooks/2020-04-04-understanding-cross-entropy-loss.ipynb (Apache-2.0 license)*
We've been doing multi-classification since week one, and last week, we learned about how an NN "learns" by evaluating its predictions as measured by something called a "loss function." So for multi-classification tasks, what is our loss function?
```python
path = untar_data(URLs.PETS)/'images'

def is_cat(x):
    return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.loss_func
```
## Negative Log-Likelihood & Cross Entropy Loss

To understand `CrossEntropyLoss`, we first need to understand something called negative log-likelihood.

### Negative Log-Likelihood (NLL) Loss

Let's imagine a model whose objective is to predict the label of an example given five possible classes to choose from. Our predictions might look like this ...
```python
preds = torch.randn(3, 5); preds
```
Because this is a supervised task, we know the actual labels of our three training examples above (e.g., the label of the first example is the first class, the label of the 2nd example the 4th class, and so forth)
```python
targets = torch.tensor([0, 3, 4])
```
**Step 1**: Convert the predictions for each example into probabilities using `softmax`. This describes how confident the model is that each example belongs to each class.
```python
probs = F.softmax(preds, dim=1); probs
```
If we sum the probabilities across each example, you'll see they add up to 1
```python
probs.sum(dim=1)
```
**Step 2**: Calculate the "negative log-likelihood" for each example, where `y` = the probability of the correct class:

`loss = -log(y)`

We can do this in one line using something called ***tensor/array indexing***.
```python
example_idxs = range(len(preds)); example_idxs

correct_class_probs = probs[example_idxs, targets]; correct_class_probs

nll = -torch.log(correct_class_probs); nll
```
**Step 3**: The loss is the mean of the individual NLLs
```python
nll.mean()
```
... or using PyTorch
```python
F.nll_loss(torch.log(probs), targets)
```
### Cross Entropy Loss

... or we can do this all at once using PyTorch's `CrossEntropyLoss`:
```python
F.cross_entropy(preds, targets)
```
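As a quick, hedged sanity check (not part of the original post), the two routes should agree numerically:

```python
# F.cross_entropy(preds, targets) should equal nll_loss applied to log_softmax
manual = F.nll_loss(F.log_softmax(preds, dim=1), targets)
print(torch.allclose(F.cross_entropy(preds, targets), manual))  # expect: True
```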
As you can see, cross entropy loss simply combines the `log_softmax` operation with the negative log-likelihood loss.

### So why not use accuracy?
```python
# this function is actually copied verbatim from the utils package in fastbook (see footnote 1)
def plot_function(f, tx=None, ty=None, title=None, min=-2, max=2, figsize=(6,4)):
    x = torch.linspace(min, max)
    fig, ax = plt.subplots(figsize=figsize)
    ax.plot(x, f(x))
    if tx is not None: ax.set_xlabel(tx)
    if ty is not None: ax.set_ylabel(ty)
    if title is not None: ax.set_title(title)

def f(x):
    return -torch.log(x)

plot_function(f, 'x (prob correct class)', '-log(x)', title='Negative Log-Likelihood', min=0, max=1)
```
NLL loss will be higher the smaller the probability *of the correct class*.

**What does this all mean?**

The lower the confidence the model has in predicting the correct class, the higher the loss. It will:

1) Penalize correct predictions that it isn't confident about more so than correct predictions it is very confident about.

2) And vice-versa, it will penalize incorrect predictions it is very confident about more so than incorrect predictions it isn't very confident about.

**Why is this better than accuracy?**

Because accuracy simply tells you whether you got it right or wrong (a 1 or a 0), whereas NLL incorporates the confidence as well. That information provides your model with much better insight w/r/t how well it is really doing, in a single number (from infinity down to 0), resulting in gradients that the model can actually use!

*Remember that a loss function returns a number.* That's it!

Or the more technical explanation from fastbook:

>"The gradient of a function is its slope, or its steepness, which can be defined as rise over run -- that is, how much the value of the function goes up or down, divided by how much you changed the input. We can write this in maths: `(y_new-y_old) / (x_new-x_old)`. Specifically, it is defined when `x_new` is very similar to `x_old`, meaning that their difference is very small. **But accuracy only changes at all when a prediction changes from a 3 to a 7, or vice versa.** So the problem is that a small change in weights from `x_old` to `x_new` isn't likely to cause any prediction to change, so `(y_new - y_old)` will be zero. **In other words, the gradient is zero almost everywhere.**

>As a result, **a very small change in the value of a weight will often not actually change the accuracy at all**. This means it is not useful to use accuracy as a loss function. When we use accuracy as a loss function, most of the time our gradients will actually be zero, and the model will not be able to learn from that number. That is not much use at all!" {% fn 1 %}

(A tiny numeric sketch of this "zero gradient" point follows the references below.)

### Summary

So to summarize, `accuracy` is a great metric for human intuition but not so much for your model. If you're doing multi-classification, your model will do much better with something that will provide it gradients it can actually use in improving your parameters, and that something is `cross-entropy loss`.

### References

1. https://pytorch.org/docs/stable/nn.html#crossentropyloss
2. http://wiki.fast.ai/index.php/Log_Loss
3. https://ljvmiranda921.github.io/notebook/2017/08/13/softmax-and-the-negative-log-likelihood/
4. https://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html#cross-entropy
5. https://machinelearningmastery.com/loss-and-loss-functions-for-training-deep-learning-neural-networks/

{{ 'fastbook [chapter 4](https://github.com/fastai/fastbook/blob/dc1bf74f2639aa39b16461f20406587baccb13b3/04_mnist_basics.ipynb)' | fndetail: 1 }}
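As referenced above, here is a tiny, hedged numeric sketch of the "gradient is zero almost everywhere" point (not from the book; the logits are made up). A small nudge to a logit leaves accuracy untouched but moves cross entropy:

```python
logits = torch.tensor([[2.0, 1.0, 0.1]])
target = torch.tensor([0])

for eps in [0.0, 0.01]:
    nudged = logits.clone()
    nudged[0, 0] += eps  # tiny weight-like perturbation
    acc = (nudged.argmax(dim=1) == target).float().mean().item()
    ce = F.cross_entropy(nudged, target).item()
    print(f"eps={eps}: accuracy={acc:.1f}, cross entropy={ce:.6f}")
```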
- A stationary time series is one whose statistical properties, such as mean, variance, and autocorrelation, are all constant over time. Most statistical forecasting methods are based on the assumption that the time series can be rendered approximately stationary (i.e., "stationarized") through the use of mathematical transformations. A stationarized series is relatively easy to predict: you simply predict that its statistical properties will be the same in the future as they have been in the past!
- We can check stationarity using the following:
  - Plotting Rolling Statistics: we can plot the moving average or moving variance and see if it varies with time. This is more of a visual technique.
  - Dickey-Fuller Test: this is one of the statistical tests for checking stationarity. Here the null hypothesis is that the time series is non-stationary. The test results comprise a Test Statistic and some Critical Values for different confidence levels. If the Test Statistic is less than the Critical Value, we can reject the null hypothesis and say that the series is stationary.
```python
from statsmodels.tsa.stattools import adfuller

def test_stationary(timeseries):
    # Determine rolling statistics
    moving_average = timeseries.rolling(window=12).mean()
    standard_deviation = timeseries.rolling(window=12).std()

    # Plot rolling statistics
    plt.plot(timeseries, color='blue', label="Original")
    plt.plot(moving_average, color='red', label='Mean')
    plt.plot(standard_deviation, color='black', label='Standard Deviation')
    plt.legend(loc='best')  # 'best' lets matplotlib place the legend
    plt.title('Rolling Mean & Deviation')
    plt.show(block=False)

    # Perform Dickey-Fuller test:
    print('Results Of Dickey-Fuller Test')
    tstest = adfuller(timeseries['MONSOON'], autolag='AIC')
    tsoutput = pd.Series(tstest[0:4], index=['Test Statistics', 'P-value', '#Lags used', '#Obs. used'])
    # The Test Statistic should be less than the Critical Value for stationarity
    # (the lower the p-value, the stronger the evidence for stationarity)
    for key, value in tstest[4].items():
        tsoutput['Critical Value (%s)' % key] = value
    print(tsoutput)

test_stationary(indexedDataset)
```
*Source: romilshah525/SIH-2019, ML-Predictions/.ipynb_checkpoints/Water-Level TSF - Copy (3)-checkpoint.ipynb (MIT license)*
- There are 2 major reasons behind non-stationarity of a TS:
  - Trend: varying mean over time. For example, in this case we saw that on average, the number of passengers was growing over time.
  - Seasonality: variations at specific time-frames, e.g. people might have a tendency to buy cars in a particular month because of pay increments or festivals.

### Indexed Dataset Logscale
```python
indexedDataset_logscale = np.log(indexedDataset)
test_stationary(indexedDataset_logscale)
```
### Dataset Log Minus Moving Average (dl_ma)
```python
rolmeanlog = indexedDataset_logscale.rolling(window=12).mean()
dl_ma = indexedDataset_logscale - rolmeanlog
dl_ma.head(12)

dl_ma.dropna(inplace=True)
dl_ma.head(12)

test_stationary(dl_ma)
```
### Exponential Decay Weighted Average (edwa)
```python
edwa = indexedDataset_logscale.ewm(halflife=12, min_periods=0, adjust=True).mean()
plt.plot(indexedDataset_logscale)
plt.plot(edwa, color='red')
```
### Dataset Logscale Minus Moving Exponential Decay Average (dlmeda)
```python
dlmeda = indexedDataset_logscale - edwa
test_stationary(dlmeda)
```
### Eliminating Trend and Seasonality

- Differencing: taking the difference with a particular time lag.
- Decomposition: modeling both trend and seasonality and removing them from the model.

### Differencing: Dataset Log Div Shifting (dlds)
```python
# Before shifting
indexedDataset_logscale.head()

# After shifting
indexedDataset_logscale.shift().head()

dlds = indexedDataset_logscale - indexedDataset_logscale.shift()
dlds.dropna(inplace=True)
test_stationary(dlds)
```
### Decomposition
```python
from statsmodels.tsa.seasonal import seasonal_decompose

decomposition = seasonal_decompose(indexedDataset_logscale, freq=10)
trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid

# To show multiple graphs in one output, use plt.subplot(abc)
plt.subplot(411)
plt.plot(indexedDataset_logscale, label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal, label='Seasonal')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual, label='Residual')
plt.legend(loc='best')
plt.tight_layout()
```
- Here trend and seasonality are separated out from the data, and we can model the residuals. Let's check the stationarity of the residuals:
```python
decomposedlogdata = residual
decomposedlogdata.dropna(inplace=True)
test_stationary(decomposedlogdata)
```
### Forecasting a Time Series

- ARIMA stands for Auto-Regressive Integrated Moving Averages. The ARIMA forecast for a stationary time series is nothing but a linear equation (like a linear regression). The predictors depend on the parameters (p,d,q) of the ARIMA model:
  - Number of AR (Auto-Regressive) terms (p): AR terms are just lags of the dependent variable. For instance, if p is 5, the predictors for x(t) will be x(t-1)…x(t-5).
  - Number of MA (Moving Average) terms (q): MA terms are lagged forecast errors in the prediction equation. For instance, if q is 5, the predictors for x(t) will be e(t-1)…e(t-5), where e(i) is the difference between the moving average at the ith instant and the actual value.
  - Number of Differences (d): these are the number of nonseasonal differences; in this case we took the first order difference. So either we can pass that variable and put d=0, or pass the original variable and put d=1. Both will generate the same results.
- An important concern here is how to determine the values of 'p' and 'q'. We use two plots to determine these numbers:
  - Autocorrelation Function (ACF): a measure of the correlation between the TS and a lagged version of itself. For instance, at lag 5, ACF would compare the series at time instants 't1'…'t2' with the series at instants 't1-5'…'t2-5' (t1-5 and t2 being end points).
  - Partial Autocorrelation Function (PACF): this measures the correlation between the TS and a lagged version of itself, but after eliminating the variations already explained by the intervening comparisons. E.g. at lag 5, it will check the correlation but remove the effects already explained by lags 1 to 4.

### ACF & PACF Plots
```python
from statsmodels.tsa.stattools import acf, pacf

lag_acf = acf(dlds, nlags=20)
lag_pacf = pacf(dlds, nlags=20, method='ols')

plt.subplot(121)
plt.plot(lag_acf)
plt.axhline(y=0, linestyle='--', color='gray')
plt.axhline(y=1.96/np.sqrt(len(dlds)), linestyle='--', color='gray')
plt.axhline(y=-1.96/np.sqrt(len(dlds)), linestyle='--', color='gray')
plt.title('AutoCorrelation Function')

plt.subplot(122)
plt.plot(lag_pacf)
plt.axhline(y=0, linestyle='--', color='gray')
plt.axhline(y=1.96/np.sqrt(len(dlds)), linestyle='--', color='gray')
plt.axhline(y=-1.96/np.sqrt(len(dlds)), linestyle='--', color='gray')
plt.title('PartialAutoCorrelation Function')
plt.tight_layout()
```
- In this plot, the two dotted lines on either side of 0 are the confidence intervals. These can be used to determine the 'p' and 'q' values as:
  - p: the lag value where the PACF chart crosses the upper confidence interval for the first time. If we look closely, in this case p=2.
  - q: the lag value where the ACF chart crosses the upper confidence interval for the first time. If we look closely, in this case q=2.
```python
from statsmodels.tsa.arima_model import ARIMA

# AR model
model = ARIMA(indexedDataset_logscale, order=(5, 1, 0))
results_AR = model.fit(disp=-1)
plt.plot(dlds)
plt.plot(results_AR.fittedvalues, color='red')
plt.title('RSS: %.4f' % sum((results_AR.fittedvalues - dlds['MONSOON'])**2))
print('Plotting AR Model')

# MA model
model = ARIMA(indexedDataset_logscale, order=(0, 1, 2))
results_MA = model.fit(disp=-1)
plt.plot(dlds)
plt.plot(results_MA.fittedvalues, color='red')
plt.title('RSS: %.4f' % sum((results_MA.fittedvalues - dlds['MONSOON'])**2))
print('Plotting MA Model')

# Combined ARIMA model
model = ARIMA(indexedDataset_logscale, order=(5, 1, 2))
results_ARIMA = model.fit(disp=-1)
plt.plot(dlds)
plt.plot(results_ARIMA.fittedvalues, color='red')
plt.title('RSS: %.4f' % sum((results_ARIMA.fittedvalues - dlds['MONSOON'])**2))
print('Plotting Combined Model')
```
### Taking it back to the original scale from the residual scale
```python
# storing the predicted results as a separate series
predictions_ARIMA_diff = pd.Series(results_ARIMA.fittedvalues, copy=True)
predictions_ARIMA_diff.head()
```
- Notice that these start from '1949-02-01' and not the first month. Why? This is because we took a lag of 1, and the first element doesn't have anything before it to subtract from. The way to convert the differencing back to the log scale is to add these differences consecutively to the base number. An easy way to do this is to first determine the cumulative sum at each index and then add it to the base number.
```python
# convert to cumulative sum
predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
predictions_ARIMA_diff_cumsum

predictions_ARIMA_log = pd.Series(indexedDataset_logscale['MONSOON'].ix[0], index=indexedDataset_logscale.index)
predictions_ARIMA_log

predictions_ARIMA_log = predictions_ARIMA_log.add(predictions_ARIMA_diff_cumsum, fill_value=0)
predictions_ARIMA_log
```
- Here the first element is the base number itself, and from there on the values are cumulatively added.
```python
# Last step is to take the exponent and compare with the original series.
predictions_ARIMA = np.exp(predictions_ARIMA_log)
plt.plot(indexedDataset)
plt.plot(predictions_ARIMA)
plt.title('RMSE: %.4f' % np.sqrt(sum((predictions_ARIMA - indexedDataset['MONSOON'])**2) / len(indexedDataset)))
```
- Finally we have a forecast at the original scale.
```python
results_ARIMA.plot_predict(1, 26)
# start = 1st month
# end = 10 yrs of forecasting: 144 + 12*10 = 264th month
# Two models correspond to AR & MA

x = results_ARIMA.forecast(steps=5)
print(x)

# values in residual equivalent
for i in range(0, 5):
    print(x[0][i], end='')
    print('\t', x[1][i], end='')
    print('\t', x[2][i])

np.exp(results_ARIMA.forecast(steps=5)[0])

predictions_ARIMA_diff = pd.Series(results_ARIMA.forecast(steps=5)[0], copy=True)
predictions_ARIMA_diff.head()

predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
predictions_ARIMA_diff_cumsum.head()

# add the cumulative differences to the base (first) log value, hard-coded here
predictions_ARIMA_log = []
for i in range(0, len(predictions_ARIMA_diff_cumsum)):
    predictions_ARIMA_log.append(predictions_ARIMA_diff_cumsum[i] + 3.411478)
predictions_ARIMA_log

# Last step is to take the exponent and compare with the original series.
predictions_ARIMA = np.exp(predictions_ARIMA_log)
plt.subplot(121)
plt.plot(indexedDataset)
plt.subplot(122)
plt.plot(predictions_ARIMA)
plt.tight_layout()
# plt.title('RMSE: %.4f' % np.sqrt(sum((predictions_ARIMA - indexedDataset['MONSOON'])**2) / len(indexedDataset)))

np.exp(predictions_ARIMA_log)
```
### Distribution of correct answers for question 10 (index=9)
```python
x = [str(['02_09', '05_06']), str(['02_05', '06_09']), str(['02_06', '05_09'])]
data1 = go.Bar(x=x, y=[480/662, 68/662, 39/662], name='all')
data2 = go.Bar(x=x, y=[158/267, 37/267, 4/267], name='junior')
data3 = go.Bar(x=x, y=[322/395, 31/395, 24/395], name='senior')
layout = {
    "title": "第10题(index=9)的正确答案分布,浇花(1)",
    "xaxis_title": "答案编码",
    "yaxis_title": "分布率",
    # tilt the x-axis tick labels
    "xaxis": {"tickangle": 0},
}
fig = go.Figure(data=[data1, data2, data3], layout=layout)
plot(fig, filename="./plot/plot_problem_浇花(1)_accuracy.html", auto_open=False,
     image='png', image_height=800, image_width=1500)
offline.iplot(fig)

x = [str(['2_16', '6_15', '12_14']), str(['2_15', '6_16', '12_14']), str(['2_12', '6_14', '15_16']),
     str(['2_12', '6_15', '14_16']), str(['2_14', '6_12', '15_16']), str(['2_16', '6_15', '12_14']),
     str(['2_16', '6_14', '12_15']), str(['2_15', '6_14', '12_16']), str(['2_14', '6_16', '12_15']),
     str(['2_15', '6_12', '14_16']), str(['2_6', '12_14', '15_16']), str(['2_6', '12_16', '14_15']),
     str(['2_6', '12_15', '14_16']), str(['2_16', '6_12', '14_15']), str(['2_12', '6_16', '14_15'])]
data1 = go.Bar(x=x, y=np.array([225, 60, 51, 29, 25, 33, 15, 15, 8, 7, 3, 2, 0, 0, 0]) / data_entity.row_num, name='all')
data2 = go.Bar(x=x, y=np.array([63, 19, 26, 4, 15, 7, 4, 6, 2, 2, 0, 0, 0, 0, 0]) / data_entity_junior.row_num, name='junior')
data3 = go.Bar(x=x, y=np.array([162, 41, 25, 25, 10, 15, 11, 9, 6, 5, 3, 2, 0, 0, 0]) / data_entity_senior.row_num, name='senior')
layout = {
    "title": "第11题(index=10)的正确答案分布,浇花(2)",
    "xaxis_title": "答案编码",
    "xaxis_range": [0, 6000],
    "yaxis_title": "分布率",
    # tilt the x-axis tick labels by 60 degrees
    "xaxis": {"tickangle": 60},
}
fig = go.Figure(data=[data1, data2, data3], layout=layout)
plot(fig, filename="./plot/plot_problem_浇花(2)_accuracy.html", auto_open=False,
     image='png', image_height=800, image_width=1500)
offline.iplot(fig)

list(np.array([1, 2, 3]) / 10)

### plot index = 18
df_18 = pd.read_excel('./output/default/18_count.xlsx')
verify_list = list(pd.read_excel('./output/default/18_count.xlsx').iloc[:, 1])
not_in_list = []
for i in range(8):
    if int(bin(i)[2:].replace('1', '2')) not in verify_list:
        not_in_list.append(int(bin(i)[2:].replace('1', '2')))
        # print(str(int(bin(i)[2:].replace('1', '2'))).zfill(6))

defult_pd = list(pd.read_excel('./output/default/18_count.xlsx').groupby('success'))[1][1].sort_values(by=['count'], ascending=False)
x = [str(int(v)).zfill(3) for v in list(defult_pd.loc[:, 'list'])]
x += not_in_list
x

defult_pd = list(pd.read_excel('./output/default/18_count.xlsx').groupby('success'))[1][1].sort_values(by=['count'], ascending=False)
defult_pd = pd.DataFrame(index=[str(int(v)).zfill(3) for v in list(defult_pd.loc[:, 'list'])],
                         data={'count': list(defult_pd.loc[:, 'count']), 'success': list(defult_pd.loc[:, 'success'])})
junior_pd = list(pd.read_excel('./output/junior/18_count.xlsx').groupby('success'))[1][1].sort_values(by=['count'], ascending=False)
junior_pd = pd.DataFrame(index=[str(int(v)).zfill(3) for v in list(junior_pd.loc[:, 'list'])],
                         data={'count': list(junior_pd.loc[:, 'count']), 'success': list(junior_pd.loc[:, 'success'])})
senior_pd = list(pd.read_excel('./output/senior/18_count.xlsx').groupby('success'))[1][1].sort_values(by=['count'], ascending=False)
senior_pd = pd.DataFrame(index=[str(int(v)).zfill(3) for v in list(senior_pd.loc[:, 'list'])],
                         data={'count': list(senior_pd.loc[:, 'count']), 'success': list(senior_pd.loc[:, 'success'])})

y1 = []
y2 = []
y3 = []
for ans in x:
    if ans in list(defult_pd.index):
        y1.append(defult_pd.loc[ans, 'count'])
    else:
        y1.append(0)
for ans in x:
    if ans in list(junior_pd.index):
        y2.append(junior_pd.loc[ans, 'count'])
    else:
        y2.append(0)
for ans in x:
    if ans in list(senior_pd.index):
        y3.append(senior_pd.loc[ans, 'count'])
    else:
        y3.append(0)

#### plot index 18
data1 = go.Bar(x=x, y=np.array(y1) / data_entity.row_num, name='all')
data2 = go.Bar(x=x, y=np.array(y2) / data_entity_junior.row_num, name='junior')
data3 = go.Bar(x=x, y=np.array(y3) / data_entity_senior.row_num, name='senior')
layout = {
    "title": "第16题(index=18)的正确答案分布,供水系统(1)",
    "xaxis_title": "答案编码",
    "yaxis_title": "分布率",
    # tilt the x-axis tick labels by 60 degrees
    "xaxis": {"tickangle": 60},
}
fig = go.Figure(data=[data1, data2, data3], layout=layout)
plot(fig, filename="./plot/plot_problem_供水系统(1)_accuracy.html", auto_open=False,
     image='png', image_height=800, image_width=1500)
offline.iplot(fig)

df_19 = pd.read_excel('./output/default/19_count.xlsx')
verify_list = list(pd.read_excel('./output/default/19_count.xlsx').iloc[:, 1])
not_in_list = []
for i in range(64):
    if int(bin(i)[2:].replace('1', '2')) not in verify_list:
        not_in_list.append(int(bin(i)[2:].replace('1', '2')))
        # print(str(int(bin(i)[2:].replace('1', '2'))).zfill(6))

defult_pd = list(pd.read_excel('./output/default/19_count.xlsx').groupby('success'))[1][1].sort_values(by=['count'], ascending=False)
x = [str(int(v)).zfill(6) for v in list(defult_pd.loc[:, 'list'])]
x += not_in_list

defult_pd = list(pd.read_excel('./output/default/19_count.xlsx').groupby('success'))[1][1].sort_values(by=['count'], ascending=False)
defult_pd = pd.DataFrame(index=[str(int(v)).zfill(6) for v in list(defult_pd.loc[:, 'list'])],
                         data={'count': list(defult_pd.loc[:, 'count']), 'success': list(defult_pd.loc[:, 'success'])})
junior_pd = list(pd.read_excel('./output/junior/19_count.xlsx').groupby('success'))[1][1].sort_values(by=['count'], ascending=False)
junior_pd = pd.DataFrame(index=[str(int(v)).zfill(6) for v in list(junior_pd.loc[:, 'list'])],
                         data={'count': list(junior_pd.loc[:, 'count']), 'success': list(junior_pd.loc[:, 'success'])})
senior_pd = list(pd.read_excel('./output/senior/19_count.xlsx').groupby('success'))[1][1].sort_values(by=['count'], ascending=False)
senior_pd = pd.DataFrame(index=[str(int(v)).zfill(6) for v in list(senior_pd.loc[:, 'list'])],
                         data={'count': list(senior_pd.loc[:, 'count']), 'success': list(senior_pd.loc[:, 'success'])})

y1 = []
y2 = []
y3 = []
for ans in x:
    if ans in list(defult_pd.index):
        y1.append(defult_pd.loc[ans, 'count'])
    else:
        y1.append(0)
for ans in x:
    if ans in list(junior_pd.index):
        y2.append(junior_pd.loc[ans, 'count'])
    else:
        y2.append(0)
for ans in x:
    if ans in list(senior_pd.index):
        y3.append(senior_pd.loc[ans, 'count'])
    else:
        y3.append(0)

#### plot index 19
data1 = go.Bar(x=x, y=np.array(y1) / data_entity.row_num, name='all')
data2 = go.Bar(x=x, y=np.array(y2) / data_entity_junior.row_num, name='junior')
data3 = go.Bar(x=x, y=np.array(y3) / data_entity_senior.row_num, name='senior')
layout = {
    "title": "第17题(index=19)的正确答案分布,供水系统(2)",
    "xaxis_title": "答案编码",
    "yaxis_title": "分布率",
    # tilt the x-axis tick labels by 60 degrees
    "xaxis": {"tickangle": 60},
}
fig = go.Figure(data=[data1, data2, data3], layout=layout)
plot(fig, filename="./plot/plot_problem_供水系统(2)_accuracy.html", auto_open=False,
     image='png', image_height=800, image_width=1500)
offline.iplot(fig)
```
*Source: Brook1711/openda1, first_analysis/plot.ipynb (Apache-2.0 license)*
### Find all the correct answers to question 19
```python
# All possible combinations of pentagram (0, 'a') and triangle (1, 'b')
seq_list = []
for i in range(8):
    for j in range(int(math.pow(2, i + 1))):
        temp = str(bin(j))[2:].zfill(i + 1).replace('0', 'a')
        temp = temp.replace('1', 'b')
        seq_list.append(temp)

# All possible combinations of rectangle (1) and circle (0)
trans_list = []
for i in range(3):
    for j in range(int(math.pow(2, i + 1))):
        trans_list.append(str(bin(j))[2:].zfill(i + 1))

# The correct sequence
verify_str = '10100010010'
right_ans = []
cnt = 0
for seq in seq_list:
    for star in trans_list:
        for trian in trans_list:
            cnt += 1
            if seq.replace('a', star).replace('b', trian) == verify_str:
                right_ans.append([seq.replace('a', '0').replace('b', '1'), star, trian])
right_ans

### plot index = 22
defult_pd = list(pd.read_excel('./output/default/22_count.xlsx').groupby('success'))[1][1].sort_values(by=['count'], ascending=False)
x = [str_ for str_ in list(defult_pd.loc[:, 'list'])]
x

defult_pd = list(pd.read_excel('./output/default/22_count.xlsx').groupby('success'))[1][1].sort_values(by=['count'], ascending=False)
defult_pd = pd.DataFrame(index=[str_ for str_ in list(defult_pd.loc[:, 'list'])],
                         data={'count': list(defult_pd.loc[:, 'count']), 'success': list(defult_pd.loc[:, 'success'])})
junior_pd = list(pd.read_excel('./output/junior/22_count.xlsx').groupby('success'))[1][1].sort_values(by=['count'], ascending=False)
junior_pd = pd.DataFrame(index=[str_ for str_ in list(junior_pd.loc[:, 'list'])],
                         data={'count': list(junior_pd.loc[:, 'count']), 'success': list(junior_pd.loc[:, 'success'])})
senior_pd = list(pd.read_excel('./output/senior/22_count.xlsx').groupby('success'))[1][1].sort_values(by=['count'], ascending=False)
senior_pd = pd.DataFrame(index=[str_ for str_ in list(senior_pd.loc[:, 'list'])],
                         data={'count': list(senior_pd.loc[:, 'count']), 'success': list(senior_pd.loc[:, 'success'])})

# NOTE: y1/y2/y3 are reused from the index-19 cell above and are not recomputed here
data1 = go.Bar(x=x, y=np.array(y1) / data_entity.row_num, name='all')
data2 = go.Bar(x=x, y=np.array(y2) / data_entity_junior.row_num, name='junior')
data3 = go.Bar(x=x, y=np.array(y3) / data_entity_senior.row_num, name='senior')
layout = {
    "title": "第19题(index=22)的正确答案分布,对应的形状(2)",
    "xaxis_title": "答案编码",
    "yaxis_title": "分布率",
    # tilt the x-axis tick labels by 20 degrees
    "xaxis": {"tickangle": 20},
}
fig = go.Figure(data=[data1, data2, data3], layout=layout)
plot(fig, filename="./plot/plot_problem_对应的形状(2)_accuracy.html", auto_open=False,
     image='png', image_height=800, image_width=1500)
offline.iplot(fig)
```
# Initialization

Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well-chosen initialization method will help learning.

If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results.

A well-chosen initialization can:
- Speed up the convergence of gradient descent
- Increase the odds of gradient descent converging to a lower training (and generalization) error

To get started, run the following cell to load the packages and the planar dataset you will try to classify.
```python
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec

%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
```
*Source: soltaniehha/deep-learning-specialization-coursera, 02-Improving-Deep-Neural-Networks/week1/Programming-Assignments/Initialization/Initialization.ipynb (MIT license)*
You would like a classifier to separate the blue dots from the red dots.

## 1 - Neural Network model

You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with:

- *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.
- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values.
- *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015.

**Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
```python
def model(X, Y, learning_rate=0.01, num_iterations=15000, print_cost=True, initialization="he"):
    """
    Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (2, number of examples)
    Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
    learning_rate -- learning rate for gradient descent
    num_iterations -- number of iterations to run gradient descent
    print_cost -- if True, print the cost every 1000 iterations
    initialization -- flag to choose which initialization to use ("zeros", "random" or "he")

    Returns:
    parameters -- parameters learnt by the model
    """

    grads = {}
    costs = []  # to keep track of the loss
    m = X.shape[1]  # number of examples
    layers_dims = [X.shape[0], 10, 5, 1]

    # Initialize parameters dictionary.
    if initialization == "zeros":
        parameters = initialize_parameters_zeros(layers_dims)
    elif initialization == "random":
        parameters = initialize_parameters_random(layers_dims)
    elif initialization == "he":
        parameters = initialize_parameters_he(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
        a3, cache = forward_propagation(X, parameters)

        # Loss
        cost = compute_loss(a3, Y)

        # Backward propagation.
        grads = backward_propagation(X, Y, cache)

        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)

        # Print the loss every 1000 iterations
        if print_cost and i % 1000 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
            costs.append(cost)

    # plot the loss
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
```
## 2 - Zero initialization

There are two types of parameters to initialize in a neural network:
- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$
- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$

**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use `np.zeros((..,..))` with the correct shapes.
```python
# GRADED FUNCTION: initialize_parameters_zeros

def initialize_parameters_zeros(layers_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                    b1 -- bias vector of shape (layers_dims[1], 1)
                    ...
                    WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                    bL -- bias vector of shape (layers_dims[L], 1)
    """

    parameters = {}
    L = len(layers_dims)  # number of layers in the network

    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
        ### END CODE HERE ###
    return parameters

parameters = initialize_parameters_zeros([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
```
W1 = [[0. 0. 0.]
 [0. 0. 0.]]
b1 = [[0.]
 [0.]]
W2 = [[0. 0.]]
b2 = [[0.]]
```
**Expected Output**:

| | |
|---|---|
| **W1** | [[ 0. 0. 0.] [ 0. 0. 0.]] |
| **b1** | [[ 0.] [ 0.]] |
| **W2** | [[ 0. 0.]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using zeros initialization.
```python
parameters = model(train_X, train_Y, initialization="zeros")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
```
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
```
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
```python
print("predictions_train = " + str(predictions_train))
print("predictions_test = " + str(predictions_test))

plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
The model is predicting 0 for every example.

In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression.

**What you should remember**:
- The weights $W^{[l]}$ should be initialized randomly to break symmetry.
- It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly.

## 3 - Random initialization

To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.

**Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
```python
# GRADED FUNCTION: initialize_parameters_random

def initialize_parameters_random(layers_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                    b1 -- bias vector of shape (layers_dims[1], 1)
                    ...
                    WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                    bL -- bias vector of shape (layers_dims[L], 1)
    """

    np.random.seed(3)  # This seed makes sure your "random" numbers will be the same as ours
    parameters = {}
    L = len(layers_dims)  # integer representing the number of layers

    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
        ### END CODE HERE ###

    return parameters

parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
```
W1 = [[ 17.88628473   4.36509851   0.96497468]
 [-18.63492703  -2.77388203  -3.54758979]]
b1 = [[0.]
 [0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[0.]]
```
**Expected Output**:

| | |
|---|---|
| **W1** | [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] |
| **b1** | [[ 0.] [ 0.]] |
| **W2** | [[-0.82741481 -6.27000677]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using random initialization.
```python
parameters = model(train_X, train_Y, initialization="random")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
```
Cost after iteration 0: inf
Cost after iteration 1000: 0.6250884962121392
```
If you see "inf" as the cost after iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this (a sketch of one such fix appears below). But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s.
```python
print(predictions_train)
print(predictions_test)

plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
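As an aside on the "inf" cost mentioned above: it comes from taking `log(0)`, and the usual numerical fix is to clip the sigmoid output away from exactly 0 and 1 before the log. A hedged sketch only (the assignment's actual loss lives in `init_utils.compute_loss`):

```python
def stable_binary_cross_entropy(a3, Y, eps=1e-12):
    # Clipping keeps log() away from log(0) = -inf
    a3 = np.clip(a3, eps, 1 - eps)
    m = Y.shape[1]
    return -np.sum(Y * np.log(a3) + (1 - Y) * np.log(1 - a3)) / m
```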
**Observations**:
- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.
- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm.
- If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.

**In summary**:
- Initializing weights to very large random values does not work well.
- Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!

## 4 - He initialization

Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)

**Exercise**: Implement the following function to initialize your parameters with He initialization.

**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
```python
# GRADED FUNCTION: initialize_parameters_he

def initialize_parameters_he(layers_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                    b1 -- bias vector of shape (layers_dims[1], 1)
                    ...
                    WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                    bL -- bias vector of shape (layers_dims[L], 1)
    """

    np.random.seed(3)
    parameters = {}
    L = len(layers_dims) - 1  # integer representing the number of layers

    for l in range(1, L + 1):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * (np.sqrt(2. / layers_dims[l-1]))
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
        ### END CODE HERE ###

    return parameters

parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
```
W1 = [[ 1.78862847  0.43650985]
 [ 0.09649747 -1.8634927 ]
 [-0.2773882  -0.35475898]
 [-0.08274148 -0.62700068]]
b1 = [[0.]
 [0.]
 [0.]
 [0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268  0.62552248]]
b2 = [[0.]]
```
**Expected Output**:

| | |
|---|---|
| **W1** | [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] |
| **b1** | [[ 0.] [ 0.] [ 0.] [ 0.]] |
| **W2** | [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using He initialization.
```python
parameters = model(train_X, train_Y, initialization="he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)

plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The model with He initialization separates the blue and the red dots very well in a small number of iterations.

## 5 - Conclusions

You have seen three different types of initializations. For the same number of iterations and the same hyperparameters, the comparison is:

| Model | Train accuracy | Problem/Comment |
|---|---|---|
| 3-layer NN with zeros initialization | 50% | fails to break symmetry |
| 3-layer NN with large random initialization | 83% | too large weights |
| 3-layer NN with He initialization | 99% | recommended method |

**What you should remember from this notebook**:
- Different initializations lead to different results
- Random initialization is used to break symmetry and make sure different hidden units can learn different things
- Don't initialize to values that are too large
- He initialization works well for networks with ReLU activations.
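For comparison with the Xavier scaling mentioned in section 4, here is a hedged sketch (not part of the graded assignment) that simply swaps the `sqrt(2./layers_dims[l-1])` factor for `sqrt(1./layers_dims[l-1])`:

```python
def initialize_parameters_xavier(layers_dims):
    np.random.seed(3)
    parameters = {}
    L = len(layers_dims) - 1  # integer representing the number of layers

    for l in range(1, L + 1):
        # Same shape logic as initialize_parameters_he, different scaling factor
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(1. / layers_dims[l-1])
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))

    return parameters
```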
```python
%load_ext version_information
%version_information numpy, matplotlib, sklearn
```
# 3D volumetric rendering with NeRF

**Authors:** [Aritra Roy Gosthipaty](https://twitter.com/arig23498), [Ritwik Raha](https://twitter.com/ritwik_raha)<br>
**Date created:** 2021/08/09<br>
**Last modified:** 2021/08/09<br>
**Description:** Minimal implementation of volumetric rendering as shown in NeRF.

## Introduction

In this example, we present a minimal implementation of the research paper [**NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis**](https://arxiv.org/abs/2003.08934) by Ben Mildenhall et al. The authors have proposed an ingenious way to *synthesize novel views of a scene* by modelling the *volumetric scene function* through a neural network.

To help you understand this intuitively, let's start with the following question: *would it be possible to give to a neural network the position of a pixel in an image, and ask the network to predict the color at that position?*

| ![2d-train](https://i.imgur.com/DQM92vN.png) |
| :---: |
| **Figure 1**: A neural network being given coordinates of an image as input and asked to predict the color at the coordinates. |

The neural network would hypothetically *memorize* (overfit on) the image. This means that our neural network would have encoded the entire image in its weights. We could query the neural network with each position, and it would eventually reconstruct the entire image.

| ![2d-test](https://i.imgur.com/6Qz5Hp1.png) |
| :---: |
| **Figure 2**: The trained neural network recreates the image from scratch. |

A question now arises: how do we extend this idea to learn a 3D volumetric scene? Implementing a similar process as above would require the knowledge of every voxel (volume pixel). Turns out, this is quite a challenging task to do.

The authors of the paper propose a minimal and elegant way to learn a 3D scene using a few images of the scene. They discard the use of voxels for training. The network learns to model the volumetric scene, thus generating novel views (images) of the 3D scene that the model was not shown at training time.

There are a few prerequisites one needs to understand to fully appreciate the process. We structure the example in such a way that you will have all the required knowledge before starting the implementation.

## Setup
```python
# Setting random seed to obtain reproducible results.
import tensorflow as tf

tf.random.set_seed(42)

import os
import glob
import imageio
import numpy as np
from tqdm import tqdm
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt

# Initialize global variables.
AUTO = tf.data.AUTOTUNE
BATCH_SIZE = 5
NUM_SAMPLES = 32
POS_ENCODE_DIMS = 16
EPOCHS = 20
```
*Source: k-w-w/keras-io, examples/vision/ipynb/nerf.ipynb (Apache-2.0 license)*
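As an aside, the Figure 1-2 thought experiment (memorizing a single image with a coordinate-to-color MLP) can be sketched in a few lines. This is a hedged illustration only, using a random stand-in "image"; it is not part of the NeRF pipeline below:

```python
# Overfit a tiny MLP that maps a normalized pixel coordinate (y, x) to RGB.
toy_image = np.random.rand(8, 8, 3).astype("float32")  # stand-in for a real image
ys, xs = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
coords = np.stack([ys.ravel(), xs.ravel()], axis=-1).astype("float32") / 7.0
colors = toy_image.reshape(-1, 3)

toy_mlp = keras.Sequential(
    [
        layers.Dense(64, activation="relu", input_shape=(2,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(3, activation="sigmoid"),
    ]
)
toy_mlp.compile(optimizer="adam", loss="mse")
toy_mlp.fit(coords, colors, epochs=200, verbose=0)  # "memorize" the image

reconstruction = toy_mlp.predict(coords).reshape(8, 8, 3)
```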
## Download and load the data

The `npz` data file contains images, camera poses, and a focal length. The images are taken from multiple camera angles as shown in **Figure 3**.

| ![camera-angles](https://i.imgur.com/FLsi2is.png) |
| :---: |
| **Figure 3**: Multiple camera angles [Source: NeRF](https://arxiv.org/abs/2003.08934) |

To understand camera poses in this context, we have to first allow ourselves to think that a *camera is a mapping between the real-world and the 2-D image*.

| ![mapping](https://www.mathworks.com/help/vision/ug/calibration_coordinate_blocks.png) |
| :---: |
| **Figure 4**: 3-D world to 2-D image mapping through a camera [Source: Mathworks](https://www.mathworks.com/help/vision/ug/camera-calibration.html) |

Consider the following equation:

$$x = PX$$

Where **x** is the 2-D image point, **X** is the 3-D world point and **P** is the camera-matrix. **P** is a 3 x 4 matrix that plays the crucial role of mapping the real world object onto an image plane.

The camera-matrix is an *affine transform matrix* that is concatenated with a 3 x 1 column `[image height, image width, focal length]` to produce the *pose matrix*. This matrix is of dimensions 3 x 5 where the first 3 x 3 block is in the camera's point of view. The axes are `[down, right, backwards]` or `[-y, x, z]` where the camera is facing forwards `-z`.

| ![camera-mapping](https://i.imgur.com/kvjqbiO.png) |
| :---: |
| **Figure 5**: The affine transformation. |

The COLMAP frame is `[right, down, forwards]` or `[x, -y, -z]`. Read more about COLMAP [here](https://colmap.github.io/).
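As a quick illustration of $x = PX$, here is a hedged toy example (all values made up) that projects a homogeneous 3-D point through a 3 x 4 camera matrix:

```python
# Intrinsics-only camera at the origin with a hypothetical focal length f.
f = 100.0
P = np.array(
    [
        [f, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
    ]
)
X_world = np.array([0.5, -0.2, 4.0, 1.0])  # homogeneous 3-D world point

x_homogeneous = P @ X_world
x_image = x_homogeneous[:2] / x_homogeneous[2]  # perspective divide -> 2-D point
print(x_image)
```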
# Download the data if it does not already exist. file_name = "tiny_nerf_data.npz" url = "https://people.eecs.berkeley.edu/~bmild/nerf/tiny_nerf_data.npz" if not os.path.exists(file_name): data = keras.utils.get_file(fname=file_name, origin=url) data = np.load(data) images = data["images"] im_shape = images.shape (num_images, H, W, _) = images.shape (poses, focal) = (data["poses"], data["focal"]) # Plot a random image from the dataset for visualization. plt.imshow(images[np.random.randint(low=0, high=num_images)]) plt.show()
_____no_output_____
Apache-2.0
examples/vision/ipynb/nerf.ipynb
k-w-w/keras-io
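As a quick sanity check (a sketch using only the variables loaded above), we can inspect the arrays before building the pipeline:

# Inspect the loaded arrays.
print("images:", images.shape)  # (num_images, H, W, 3)
print("poses: ", poses.shape)   # one pose matrix per image
print("focal: ", focal)         # scalar focal length

# The first 3 x 3 block of a pose holds the camera orientation; the last
# column (pose[:3, -1]) is what `get_rays` below broadcasts as the ray origins.
print(poses[0][:3, :3])
print(poses[0][:3, -1])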
Data pipeline

Now that you've understood the notion of the camera matrix and the mapping from a 3D scene to 2D images, let's talk about the inverse mapping, i.e. from 2D image to the 3D scene. We'll need to talk about volumetric rendering with ray casting and tracing, which are common computer graphics techniques. This section will help you get up to speed with these techniques.

Consider an image with `N` pixels. We shoot a ray through each pixel and sample some points on the ray. A ray is commonly parameterized by the equation `r(t) = o + td` where `t` is the parameter, `o` is the origin and `d` is the unit directional vector as shown in **Figure 6**.

| ![img](https://i.imgur.com/ywrqlzt.gif) |
| :---: |
| **Figure 6**: `r(t) = o + td` where t is 3 |

In **Figure 7**, we consider a ray, and we sample some random points on the ray. These sample points each have a unique location `(x, y, z)` and the ray has a viewing angle `(theta, phi)`. The viewing angle is particularly interesting as we can shoot a ray through a single pixel in a lot of different ways, each with a unique viewing angle. Another interesting thing to notice here is the noise that is added to the sampling process. We add uniform noise to each sample so that the samples correspond to a continuous distribution. In **Figure 7** the blue points are the evenly distributed samples and the white points `(t1, t2, t3)` are randomly placed between the samples.

| ![img](https://i.imgur.com/r9TS2wv.gif) |
| :---: |
| **Figure 7**: Sampling the points from a ray. |

**Figure 8** showcases the entire sampling process in 3D, where you can see the rays coming out of the white image. This means that each pixel will have its corresponding rays and each ray will be sampled at distinct points.

| ![3-d rays](https://i.imgur.com/hr4D2g2.gif) |
| :---: |
| **Figure 8**: Shooting rays from all the pixels of an image in 3-D |

These sampled points act as the input to the NeRF model. The model is then asked to predict the RGB color and the volume density at that point.

| ![3-Drender](https://i.imgur.com/HHb6tlQ.png) |
| :---: |
| **Figure 9**: Data pipeline [Source: NeRF](https://arxiv.org/abs/2003.08934) |
def encode_position(x):
    """Encodes the position into its corresponding Fourier feature.

    Args:
        x: The input coordinate.

    Returns:
        Fourier features tensors of the position.
    """
    positions = [x]
    for i in range(POS_ENCODE_DIMS):
        for fn in [tf.sin, tf.cos]:
            positions.append(fn(2.0 ** i * x))
    return tf.concat(positions, axis=-1)


def get_rays(height, width, focal, pose):
    """Computes origin point and direction vector of rays.

    Args:
        height: Height of the image.
        width: Width of the image.
        focal: The focal length between the images and the camera.
        pose: The pose matrix of the camera.

    Returns:
        Tuple of origin point and direction vector for rays.
    """
    # Build a meshgrid for the rays.
    i, j = tf.meshgrid(
        tf.range(width, dtype=tf.float32),
        tf.range(height, dtype=tf.float32),
        indexing="xy",
    )

    # Normalize the x axis coordinates.
    transformed_i = (i - width * 0.5) / focal

    # Normalize the y axis coordinates.
    transformed_j = (j - height * 0.5) / focal

    # Create the direction unit vectors.
    directions = tf.stack([transformed_i, -transformed_j, -tf.ones_like(i)], axis=-1)

    # Get the camera matrix.
    camera_matrix = pose[:3, :3]
    height_width_focal = pose[:3, -1]

    # Get origins and directions for the rays.
    transformed_dirs = directions[..., None, :]
    camera_dirs = transformed_dirs * camera_matrix
    ray_directions = tf.reduce_sum(camera_dirs, axis=-1)
    ray_origins = tf.broadcast_to(height_width_focal, tf.shape(ray_directions))

    # Return the origins and directions.
    return (ray_origins, ray_directions)


def render_flat_rays(ray_origins, ray_directions, near, far, num_samples, rand=False):
    """Renders the rays and flattens them.

    Args:
        ray_origins: The origin points for rays.
        ray_directions: The direction unit vectors for the rays.
        near: The near bound of the volumetric scene.
        far: The far bound of the volumetric scene.
        num_samples: Number of sample points in a ray.
        rand: Choice for randomising the sampling strategy.

    Returns:
        Tuple of flattened rays and sample points on each ray.
    """
    # Compute 3D query points.
    # Equation: r(t) = o+td -> Building the "t" here.
    t_vals = tf.linspace(near, far, num_samples)
    if rand:
        # Inject uniform noise into the sample space to make the sampling
        # continuous.
        shape = list(ray_origins.shape[:-1]) + [num_samples]
        noise = tf.random.uniform(shape=shape) * (far - near) / num_samples
        t_vals = t_vals + noise

    # Equation: r(t) = o + td -> Building the "r" here.
    rays = ray_origins[..., None, :] + (
        ray_directions[..., None, :] * t_vals[..., None]
    )
    rays_flat = tf.reshape(rays, [-1, 3])
    rays_flat = encode_position(rays_flat)
    return (rays_flat, t_vals)


def map_fn(pose):
    """Maps individual pose to flattened rays and sample points.

    Args:
        pose: The pose matrix of the camera.

    Returns:
        Tuple of flattened rays and sample points corresponding to the
        camera pose.
    """
    (ray_origins, ray_directions) = get_rays(height=H, width=W, focal=focal, pose=pose)
    (rays_flat, t_vals) = render_flat_rays(
        ray_origins=ray_origins,
        ray_directions=ray_directions,
        near=2.0,
        far=6.0,
        num_samples=NUM_SAMPLES,
        rand=True,
    )
    return (rays_flat, t_vals)


# Create the training split.
split_index = int(num_images * 0.8)

# Split the images into training and validation.
train_images = images[:split_index]
val_images = images[split_index:]

# Split the poses into training and validation.
train_poses = poses[:split_index]
val_poses = poses[split_index:]

# Make the training pipeline.
train_img_ds = tf.data.Dataset.from_tensor_slices(train_images) train_pose_ds = tf.data.Dataset.from_tensor_slices(train_poses) train_ray_ds = train_pose_ds.map(map_fn, num_parallel_calls=AUTO) training_ds = tf.data.Dataset.zip((train_img_ds, train_ray_ds)) train_ds = ( training_ds.shuffle(BATCH_SIZE) .batch(BATCH_SIZE, drop_remainder=True, num_parallel_calls=AUTO) .prefetch(AUTO) ) # Make the validation pipeline. val_img_ds = tf.data.Dataset.from_tensor_slices(val_images) val_pose_ds = tf.data.Dataset.from_tensor_slices(val_poses) val_ray_ds = val_pose_ds.map(map_fn, num_parallel_calls=AUTO) validation_ds = tf.data.Dataset.zip((val_img_ds, val_ray_ds)) val_ds = ( validation_ds.shuffle(BATCH_SIZE) .batch(BATCH_SIZE, drop_remainder=True, num_parallel_calls=AUTO) .prefetch(AUTO) )
_____no_output_____
Apache-2.0
examples/vision/ipynb/nerf.ipynb
k-w-w/keras-io
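As a quick check (a sketch built on the pipeline above), we can pull one batch and confirm that the shapes match `BATCH_SIZE`, the image size, and the positional-encoding width `2 * 3 * POS_ENCODE_DIMS + 3 = 99`:

# Peek at a single batch from the training pipeline.
imgs, (rays_flat, t_vals) = next(iter(train_ds))
print(imgs.shape)       # (BATCH_SIZE, H, W, 3)
print(rays_flat.shape)  # (BATCH_SIZE, H * W * NUM_SAMPLES, 99)
print(t_vals.shape)     # (BATCH_SIZE, H, W, NUM_SAMPLES)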
NeRF model

The model is a multi-layer perceptron (MLP), with ReLU as its non-linearity.

An excerpt from the paper:

*"We encourage the representation to be multiview-consistent by restricting the network to predict the volume density sigma as a function of only the location `x`, while allowing the RGB color `c` to be predicted as a function of both location and viewing direction. To accomplish this, the MLP first processes the input 3D coordinate `x` with 8 fully-connected layers (using ReLU activations and 256 channels per layer), and outputs sigma and a 256-dimensional feature vector. This feature vector is then concatenated with the camera ray's viewing direction and passed to one additional fully-connected layer (using a ReLU activation and 128 channels) that output the view-dependent RGB color."*

Here we have gone for a minimal implementation and have used 64 Dense units instead of 256 as mentioned in the paper.
def get_nerf_model(num_layers, num_pos): """Generates the NeRF neural network. Args: num_layers: The number of MLP layers. num_pos: The number of dimensions of positional encoding. Returns: The `tf.keras` model. """ inputs = keras.Input(shape=(num_pos, 2 * 3 * POS_ENCODE_DIMS + 3)) x = inputs for i in range(num_layers): x = layers.Dense(units=64, activation="relu")(x) if i % 4 == 0 and i > 0: # Inject residual connection. x = layers.concatenate([x, inputs], axis=-1) outputs = layers.Dense(units=4)(x) return keras.Model(inputs=inputs, outputs=outputs) def render_rgb_depth(model, rays_flat, t_vals, rand=True, train=True): """Generates the RGB image and depth map from model prediction. Args: model: The MLP model that is trained to predict the rgb and volume density of the volumetric scene. rays_flat: The flattened rays that serve as the input to the NeRF model. t_vals: The sample points for the rays. rand: Choice to randomise the sampling strategy. train: Whether the model is in the training or testing phase. Returns: Tuple of rgb image and depth map. """ # Get the predictions from the nerf model and reshape it. if train: predictions = model(rays_flat) else: predictions = model.predict(rays_flat) predictions = tf.reshape(predictions, shape=(BATCH_SIZE, H, W, NUM_SAMPLES, 4)) # Slice the predictions into rgb and sigma. rgb = tf.sigmoid(predictions[..., :-1]) sigma_a = tf.nn.relu(predictions[..., -1]) # Get the distance of adjacent intervals. delta = t_vals[..., 1:] - t_vals[..., :-1] # delta shape = (num_samples) if rand: delta = tf.concat( [delta, tf.broadcast_to([1e10], shape=(BATCH_SIZE, H, W, 1))], axis=-1 ) alpha = 1.0 - tf.exp(-sigma_a * delta) else: delta = tf.concat( [delta, tf.broadcast_to([1e10], shape=(BATCH_SIZE, 1))], axis=-1 ) alpha = 1.0 - tf.exp(-sigma_a * delta[:, None, None, :]) # Get transmittance. exp_term = 1.0 - alpha epsilon = 1e-10 transmittance = tf.math.cumprod(exp_term + epsilon, axis=-1, exclusive=True) weights = alpha * transmittance rgb = tf.reduce_sum(weights[..., None] * rgb, axis=-2) if rand: depth_map = tf.reduce_sum(weights * t_vals, axis=-1) else: depth_map = tf.reduce_sum(weights * t_vals[:, None, None], axis=-1) return (rgb, depth_map)
_____no_output_____
Apache-2.0
examples/vision/ipynb/nerf.ipynb
k-w-w/keras-io
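For reference, the compositing in `render_rgb_depth` follows the standard NeRF quadrature: with density $\sigma_i$, color $c_i$ and interval width $\delta_i = t_{i+1} - t_i$ at each sample point,

$$\alpha_i = 1 - e^{-\sigma_i \delta_i}, \qquad T_i = \prod_{j<i} (1 - \alpha_j), \qquad \hat{C} = \sum_i T_i \, \alpha_i \, c_i,$$

and the depth map is the analogous weighted sum $\sum_i T_i \, \alpha_i \, t_i$. The `tf.math.cumprod(..., exclusive=True)` call computes the transmittance $T_i$.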
Training

The training step is implemented as part of a custom `keras.Model` subclass so that we can make use of the `model.fit` functionality.
class NeRF(keras.Model):
    def __init__(self, nerf_model):
        super().__init__()
        self.nerf_model = nerf_model

    def compile(self, optimizer, loss_fn):
        super().compile()
        self.optimizer = optimizer
        self.loss_fn = loss_fn
        self.loss_tracker = keras.metrics.Mean(name="loss")
        self.psnr_metric = keras.metrics.Mean(name="psnr")

    def train_step(self, inputs):
        # Get the images and the rays.
        (images, rays) = inputs
        (rays_flat, t_vals) = rays

        with tf.GradientTape() as tape:
            # Get the predictions from the model.
            rgb, _ = render_rgb_depth(
                model=self.nerf_model, rays_flat=rays_flat, t_vals=t_vals, rand=True
            )
            loss = self.loss_fn(images, rgb)

        # Get the trainable variables.
        trainable_variables = self.nerf_model.trainable_variables

        # Get the gradients of the trainable variables with respect to the loss.
        gradients = tape.gradient(loss, trainable_variables)

        # Apply the grads and optimize the model.
        self.optimizer.apply_gradients(zip(gradients, trainable_variables))

        # Get the PSNR of the reconstructed images and the source images.
        psnr = tf.image.psnr(images, rgb, max_val=1.0)

        # Compute our own metrics
        self.loss_tracker.update_state(loss)
        self.psnr_metric.update_state(psnr)
        return {"loss": self.loss_tracker.result(), "psnr": self.psnr_metric.result()}

    def test_step(self, inputs):
        # Get the images and the rays.
        (images, rays) = inputs
        (rays_flat, t_vals) = rays

        # Get the predictions from the model.
        rgb, _ = render_rgb_depth(
            model=self.nerf_model, rays_flat=rays_flat, t_vals=t_vals, rand=True
        )
        loss = self.loss_fn(images, rgb)

        # Get the PSNR of the reconstructed images and the source images.
        psnr = tf.image.psnr(images, rgb, max_val=1.0)

        # Compute our own metrics
        self.loss_tracker.update_state(loss)
        self.psnr_metric.update_state(psnr)
        return {"loss": self.loss_tracker.result(), "psnr": self.psnr_metric.result()}

    @property
    def metrics(self):
        return [self.loss_tracker, self.psnr_metric]


test_imgs, test_rays = next(iter(train_ds))
test_rays_flat, test_t_vals = test_rays

loss_list = []


class TrainMonitor(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        loss = logs["loss"]
        loss_list.append(loss)
        test_recons_images, depth_maps = render_rgb_depth(
            model=self.model.nerf_model,
            rays_flat=test_rays_flat,
            t_vals=test_t_vals,
            rand=True,
            train=False,
        )

        # Plot the rgb, depth and the loss plot.
        fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(20, 5))
        ax[0].imshow(keras.preprocessing.image.array_to_img(test_recons_images[0]))
        ax[0].set_title(f"Predicted Image: {epoch:03d}")

        ax[1].imshow(keras.preprocessing.image.array_to_img(depth_maps[0, ..., None]))
        ax[1].set_title(f"Depth Map: {epoch:03d}")

        ax[2].plot(loss_list)
        ax[2].set_xticks(np.arange(0, EPOCHS + 1, 5.0))
        ax[2].set_title(f"Loss Plot: {epoch:03d}")

        fig.savefig(f"images/{epoch:03d}.png")
        plt.show()
        plt.close()


num_pos = H * W * NUM_SAMPLES
nerf_model = get_nerf_model(num_layers=8, num_pos=num_pos)

model = NeRF(nerf_model)
model.compile(
    optimizer=keras.optimizers.Adam(), loss_fn=keras.losses.MeanSquaredError()
)

# Create a directory to save the images during training.
if not os.path.exists("images"): os.makedirs("images") model.fit( train_ds, validation_data=val_ds, batch_size=BATCH_SIZE, epochs=EPOCHS, callbacks=[TrainMonitor()], steps_per_epoch=split_index // BATCH_SIZE, ) def create_gif(path_to_images, name_gif): filenames = glob.glob(path_to_images) filenames = sorted(filenames) images = [] for filename in tqdm(filenames): images.append(imageio.imread(filename)) kargs = {"duration": 0.25} imageio.mimsave(name_gif, images, "GIF", **kargs) create_gif("images/*.png", "training.gif")
_____no_output_____
Apache-2.0
examples/vision/ipynb/nerf.ipynb
k-w-w/keras-io
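A short note on the PSNR metric tracked above: since the images are normalised to $[0, 1]$ (hence `max_val=1.0`), $\mathrm{PSNR} = -10 \log_{10}(\mathrm{MSE})$, so it rises as the reconstruction error falls.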
Visualize the training step

Here we see the training step. With the decreasing loss, the rendered images and the depth maps are getting better. In your local system, you will see the `training.gif` file generated.

![training-20](https://i.imgur.com/ql5OcYA.gif)

Inference

In this section, we ask the model to build novel views of the scene. The model was given `106` views of the scene in the training step. The collection of training images cannot contain every angle of the scene, but a trained model can represent the entire 3-D scene with a sparse set of training images.

Here we provide different poses to the model and ask it to give us the 2-D image corresponding to that camera view. If we infer the model for all the 360-degree views, it should provide an overview of the entire scenery from all around.
# Get the trained NeRF model and infer. nerf_model = model.nerf_model test_recons_images, depth_maps = render_rgb_depth( model=nerf_model, rays_flat=test_rays_flat, t_vals=test_t_vals, rand=True, train=False, ) # Create subplots. fig, axes = plt.subplots(nrows=5, ncols=3, figsize=(10, 20)) for ax, ori_img, recons_img, depth_map in zip( axes, test_imgs, test_recons_images, depth_maps ): ax[0].imshow(keras.preprocessing.image.array_to_img(ori_img)) ax[0].set_title("Original") ax[1].imshow(keras.preprocessing.image.array_to_img(recons_img)) ax[1].set_title("Reconstructed") ax[2].imshow( keras.preprocessing.image.array_to_img(depth_map[..., None]), cmap="inferno" ) ax[2].set_title("Depth Map")
_____no_output_____
Apache-2.0
examples/vision/ipynb/nerf.ipynb
k-w-w/keras-io
Render 3D Scene

Here we will synthesize novel 3D views and stitch all of them together to render a video encompassing the 360-degree view.
def get_translation_t(t):
    """Get the translation matrix for movement in t."""
    matrix = [
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, t],
        [0, 0, 0, 1],
    ]
    return tf.convert_to_tensor(matrix, dtype=tf.float32)


def get_rotation_phi(phi):
    """Get the rotation matrix for movement in phi."""
    matrix = [
        [1, 0, 0, 0],
        [0, tf.cos(phi), -tf.sin(phi), 0],
        [0, tf.sin(phi), tf.cos(phi), 0],
        [0, 0, 0, 1],
    ]
    return tf.convert_to_tensor(matrix, dtype=tf.float32)


def get_rotation_theta(theta):
    """Get the rotation matrix for movement in theta."""
    matrix = [
        [tf.cos(theta), 0, -tf.sin(theta), 0],
        [0, 1, 0, 0],
        [tf.sin(theta), 0, tf.cos(theta), 0],
        [0, 0, 0, 1],
    ]
    return tf.convert_to_tensor(matrix, dtype=tf.float32)


def pose_spherical(theta, phi, t):
    """Get the camera-to-world matrix for the corresponding theta, phi and t."""
    c2w = get_translation_t(t)
    c2w = get_rotation_phi(phi / 180.0 * np.pi) @ c2w
    c2w = get_rotation_theta(theta / 180.0 * np.pi) @ c2w
    c2w = np.array([[-1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]) @ c2w
    return c2w


rgb_frames = []
batch_flat = []
batch_t = []

# Iterate over different theta values and generate scenes.
for index, theta in tqdm(enumerate(np.linspace(0.0, 360.0, 120, endpoint=False))):
    # Get the camera-to-world matrix.
    c2w = pose_spherical(theta, -30.0, 4.0)

    # Get the rays for the current pose.
    ray_oris, ray_dirs = get_rays(H, W, focal, c2w)
    rays_flat, t_vals = render_flat_rays(
        ray_oris, ray_dirs, near=2.0, far=6.0, num_samples=NUM_SAMPLES, rand=False
    )

    if index % BATCH_SIZE == 0 and index > 0:
        batched_flat = tf.stack(batch_flat, axis=0)
        batch_flat = [rays_flat]

        batched_t = tf.stack(batch_t, axis=0)
        batch_t = [t_vals]

        rgb, _ = render_rgb_depth(
            nerf_model, batched_flat, batched_t, rand=False, train=False
        )

        temp_rgb = [np.clip(255 * img, 0.0, 255.0).astype(np.uint8) for img in rgb]

        rgb_frames = rgb_frames + temp_rgb
    else:
        batch_flat.append(rays_flat)
        batch_t.append(t_vals)

rgb_video = "rgb_video.mp4"
imageio.mimwrite(rgb_video, rgb_frames, fps=30, quality=7, macro_block_size=None)
_____no_output_____
Apache-2.0
examples/vision/ipynb/nerf.ipynb
k-w-w/keras-io
Cells

In this example I'll show how to render a large number of cells in your scene. This can be useful when visualizing the results of tracking experiments after they have been aligned to the Allen Brain Atlas reference frame.

Set up
# We begin by adding the current path to sys.path to make sure that the imports work correctly import sys sys.path.append('../') import os import pandas as pd from vtkplotter import * # Import variables from brainrender import * # <- these can be changed to personalize the look of your renders # Import brainrender classes and useful functions from brainrender.scene import Scene from brainrender.Utils.data_io import listdir
_____no_output_____
MIT
Examples/notebooks/Cells.ipynb
paulbrodersen/BrainRender
Get data

To keep things interesting, we will generate N random "cells" in a number of regions of interest. These coordinates will then be used to render the cells. If you have your coordinates saved in a file (e.g. a .csv, .h5 or .pkl), you can use `Scene.add_cells_from_file` and skip this next step.
# Create a scene
scene = Scene()

# Define in which regions to create the cells and how many
regions = ["MOs", "VISp", "ZI"]
N = 1000  # getting 1k cells per region, but brainrender can deal with >1M cells easily.

# Render brain regions and add transparency slider.
scene.add_brain_regions(regions, colors="ivory", alpha=.8)
scene.add_slider(brain_regions=regions)

print("\nRunning a quick experiment to get cell coordinates...")
cells = []  # to store x,y,z coordinates
for region in regions:
    region_cells = scene.get_n_random_points_in_region(region=region, N=N)
    cells.extend(region_cells)
x, y, z = [c[0] for c in cells], [c[1] for c in cells], [c[2] for c in cells]
cells = pd.DataFrame(dict(x=x, y=y, z=z))

# render cells
print("\nRendering...")
scene.add_cells(cells, color="red")

scene.render()  # <- this won't actually render things in a notebook
vp = Plotter(axes=0)
vp.show(scene.get_actors())
_____no_output_____
MIT
Examples/notebooks/Cells.ipynb
paulbrodersen/BrainRender
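If your cell coordinates already live on disk, the `Scene.add_cells_from_file` method mentioned above can replace the random-point step. A minimal sketch (the file name is hypothetical; it should contain x, y, z columns):

# Load cells directly from a saved file instead of generating them.
scene = Scene()
scene.add_cells_from_file("my_cells.csv")  # hypothetical .csv with x, y, z columns
scene.render()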
Load the data
import os

import pandas as pd

STATSBOMB = os.path.join('..', '..', 'data', 'statsbomb')
df_statsbomb_event = pd.read_parquet(os.path.join(STATSBOMB, 'event.parquet'))
df_statsbomb_freeze = pd.read_parquet(os.path.join(STATSBOMB, 'freeze.parquet'))
_____no_output_____
MIT
notebooks/create-data/04_statsbomb_freeze_frame_features.ipynb
andrewRowlinson/expected-goals-thesis
Filter shots
df_statsbomb_shot = df_statsbomb_event[df_statsbomb_event.type_name == 'Shot'].copy()
_____no_output_____
MIT
notebooks/create-data/04_statsbomb_freeze_frame_features.ipynb
andrewRowlinson/expected-goals-thesis
Features based on StatsBomb freeze frame

Features based on the freeze frame - this takes a while as we loop over 20k+ shots:
- space around the goalkeeper
- space around the shooter
- number of players in the shot angle to the goal

Filter out penalty goals from freeze frames
non_penalty_id = df_statsbomb_shot.loc[(df_statsbomb_shot.sub_type_name != 'Penalty'), 'id'] df_statsbomb_freeze = df_statsbomb_freeze[df_statsbomb_freeze.id.isin(non_penalty_id)].copy()
_____no_output_____
MIT
notebooks/create-data/04_statsbomb_freeze_frame_features.ipynb
andrewRowlinson/expected-goals-thesis
Add the shot taker to the freeze frame
cols_to_keep = ['id','player_id','player_name','position_id','position_name','x','y','match_id'] freeze_ids = df_statsbomb_freeze.id.unique() df_shot_taker = df_statsbomb_shot.loc[df_statsbomb_shot.id.isin(freeze_ids), cols_to_keep].copy() df_shot_taker['player_teammate'] = True df_shot_taker['event_freeze_id'] = 0 df_shot_taker.rename({'position_id': 'player_position_id', 'position_name': 'player_position_name'}, axis=1, inplace=True) df_statsbomb_freeze = pd.concat([df_statsbomb_freeze, df_shot_taker])
_____no_output_____
MIT
notebooks/create-data/04_statsbomb_freeze_frame_features.ipynb
andrewRowlinson/expected-goals-thesis
Calculate features
import numpy as np
import pandas as pd
from mplsoccer.pitch import Pitch  # mplsoccer's Pitch class (import path may vary by version)
from shapely.geometry import MultiPoint, Point, Polygon

statsbomb_pitch = Pitch()

# store the results in lists
area_goal = []
area_shot = []
n_angle = []

# Loop through the freeze frames, create a Voronoi diagram, and calculate
# the area around the goalkeeper / shot taker.
for shot_id in df_statsbomb_freeze.id.unique():
    subset = df_statsbomb_freeze.loc[df_statsbomb_freeze.id == shot_id,
                                     ['x', 'y', 'player_teammate', 'event_freeze_id',
                                      'player_position_id', 'player_position_name']].copy()
    team1, team2 = statsbomb_pitch.voronoi(subset.x, subset.y, subset.player_teammate)
    subset['rank'] = subset.groupby('player_teammate')['x'].cumcount()

    # goalkeeper voronoi
    if (subset.player_position_name == 'Goalkeeper').sum() > 0:
        goalkeeper_voronoi = team2[subset.loc[subset.player_position_id == 1, 'rank'].values[0]]
        area_goal.append(Polygon(goalkeeper_voronoi).area)
    else:
        area_goal.append(0)

    # shot voronoi
    shot_taker_voronoi = team1[subset.loc[subset.event_freeze_id == 0, 'rank'].values[0]]
    area_shot.append(Polygon(shot_taker_voronoi).area)

    # calculate number of players in the angle to the goal
    shot_taker = subset.loc[subset.event_freeze_id == 0, ['x', 'y']]
    verts = np.zeros((3, 2))
    verts[0, 0] = shot_taker.x
    verts[0, 1] = shot_taker.y
    verts[1:, :] = statsbomb_pitch.goal_right
    # the angle-to-the-goal polygon; buffer added as sometimes the shot is on the goal line
    angle = Polygon(verts).buffer(0)
    # points for players
    players = MultiPoint(subset.loc[subset.event_freeze_id != 0, ['x', 'y']].values.tolist())
    # intersection between angle and players
    intersection = players.intersection(angle)
    # calculate number of players
    if isinstance(intersection, MultiPoint):
        n_players = len(players.intersection(angle))
    elif isinstance(intersection, Point):
        n_players = 1
    else:
        n_players = 0
    n_angle.append(n_players)

# create a dataframe
df_freeze_features = pd.DataFrame({'id': df_statsbomb_freeze.id.unique(),
                                   'area_shot': area_shot,
                                   'area_goal': area_goal,
                                   'n_angle': n_angle})
_____no_output_____
MIT
notebooks/create-data/04_statsbomb_freeze_frame_features.ipynb
andrewRowlinson/expected-goals-thesis
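To see the "players in the shot angle" logic in isolation, here is a tiny sketch on made-up coordinates. It assumes StatsBomb's 120 x 80 pitch with goalposts at (120, 36) and (120, 44), i.e. what `statsbomb_pitch.goal_right` holds:

from shapely.geometry import MultiPoint, Polygon

shooter = (108, 40)
angle = Polygon([shooter, (120, 36), (120, 44)]).buffer(0)
players = MultiPoint([(112, 40), (115, 42), (100, 40)])  # the last point is behind the shooter
print(len(players.intersection(angle)))  # -> 2, mirroring the n_angle count above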
Add on goalkeeper position
gk_position = df_statsbomb_freeze.loc[(df_statsbomb_freeze.player_position_name == 'Goalkeeper') & (df_statsbomb_freeze.player_teammate == False), ['id', 'x', 'y']] gk_position.rename({'x': 'goalkeeper_x','y': 'goalkeeper_y'}, axis=1, inplace=True) df_freeze_features = df_freeze_features.merge(gk_position, how='left', on='id', validate='1:1')
_____no_output_____
MIT
notebooks/create-data/04_statsbomb_freeze_frame_features.ipynb
andrewRowlinson/expected-goals-thesis
Save features
df_freeze_features.to_parquet(os.path.join(STATSBOMB, 'freeze_features.parquet')) df_freeze_features.info()
<class 'pandas.core.frame.DataFrame'> Int64Index: 21536 entries, 0 to 21535 Data columns (total 6 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 id 21536 non-null object 1 area_shot 21536 non-null float64 2 area_goal 21536 non-null float64 3 n_angle 21536 non-null int64 4 goalkeeper_x 21477 non-null float64 5 goalkeeper_y 21477 non-null float64 dtypes: float64(4), int64(1), object(1) memory usage: 1.2+ MB
MIT
notebooks/create-data/04_statsbomb_freeze_frame_features.ipynb
andrewRowlinson/expected-goals-thesis
Simple Widget Introduction

What are widgets?

Widgets are elements that exist in both the front-end and the back-end.

![Kernel & front-end diagram](images/FrontendKernel.png)

What can they be used for?

You can use widgets to build **interactive GUIs** for your notebooks. You can also use widgets to **synchronize stateful and stateless information** between Python and JavaScript.

Using widgets

To use the widget framework, you need to **import `ipywidgets`** (older tutorials refer to `IPython.html.widgets`, the deprecated location of the same package).
from ipywidgets import *
_____no_output_____
Apache-2.0
intro_python/python_tutorials/jupyter-notebook_intro/Widget Basics.ipynb
cgentemann/tutorials
repr Widgets have their own display `repr` which allows them to be displayed using IPython's display framework. Constructing and returning an `IntSlider` automatically displays the widget (as seen below). Widgets are **displayed inside the `widget area`**, which sits between the code cell and output. **You can hide all of the widgets** in the `widget area` by clicking the grey *x* in the margin.
IntSlider(min=0, max=10)
_____no_output_____
Apache-2.0
intro_python/python_tutorials/jupyter-notebook_intro/Widget Basics.ipynb
cgentemann/tutorials
display() You can also explicitly display the widget using `display(...)`.
from IPython.display import display w = IntSlider(min=0, max=10) display(w)
_____no_output_____
Apache-2.0
intro_python/python_tutorials/jupyter-notebook_intro/Widget Basics.ipynb
cgentemann/tutorials
Multiple display() calls If you display the same widget twice, the displayed instances in the front-end **will remain in sync** with each other.
display(w)
_____no_output_____
Apache-2.0
intro_python/python_tutorials/jupyter-notebook_intro/Widget Basics.ipynb
cgentemann/tutorials
Why does displaying the same widget twice work?

Widgets are **represented in the back-end by a single object**. Each time a widget is displayed, **a new representation** of that same object is created in the front-end. These representations are called **views**.

![Kernel & front-end diagram](images/WidgetModelView.png)

Closing widgets

You can close a widget by calling its `close()` method.
display(w) w.close()
_____no_output_____
Apache-2.0
intro_python/python_tutorials/jupyter-notebook_intro/Widget Basics.ipynb
cgentemann/tutorials
Widget properties All of the IPython widgets **share a similar naming scheme**. To read the value of a widget, you can query its `value` property.
w = IntSlider(min=0, max=10) display(w) w.value
_____no_output_____
Apache-2.0
intro_python/python_tutorials/jupyter-notebook_intro/Widget Basics.ipynb
cgentemann/tutorials
Similarly, to set a widget's value, you can set its `value` property.
w.value = 100
_____no_output_____
Apache-2.0
intro_python/python_tutorials/jupyter-notebook_intro/Widget Basics.ipynb
cgentemann/tutorials
Keys

In addition to `value`, most widgets share `keys`, `description`, `disabled`, and `visible`. To see the entire list of synchronized, stateful properties of any specific widget, you can **query the `keys` property**.
w.keys
_____no_output_____
Apache-2.0
intro_python/python_tutorials/jupyter-notebook_intro/Widget Basics.ipynb
cgentemann/tutorials
Shorthand for setting the initial values of widget properties While creating a widget, you can set some or all of the initial values of that widget by **defining them as keyword arguments in the widget's constructor** (as seen below).
Text(value='Hello World!', disabled=True)
_____no_output_____
Apache-2.0
intro_python/python_tutorials/jupyter-notebook_intro/Widget Basics.ipynb
cgentemann/tutorials
Linking two similar widgets

If you need to display the same value two different ways, you'll have to use two different widgets. Instead of **attempting to manually synchronize the values** of the two widgets, you can use the `traitlets` `link` function **to link two properties together**. Below, the values of three widgets are linked together.
from traitlets import link a = FloatText() b = FloatSlider(min=0.0, max=10.0) c = FloatProgress(min=0, max=10) display(a,b,c) blink = link((a, 'value'), (b, 'value')) clink = link((a, 'value'), (c, 'value')) a.value = 5
_____no_output_____
Apache-2.0
intro_python/python_tutorials/jupyter-notebook_intro/Widget Basics.ipynb
cgentemann/tutorials
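For one-directional synchronization, `traitlets` also provides `dlink`: changes propagate from the source to the target only.

from traitlets import dlink
# Changes to `a` update `b`, but changing `b` leaves `a` untouched.
d = dlink((a, 'value'), (b, 'value'))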
Unlinking widgets Unlinking the widgets is simple. All you have to do is call `.unlink` on the link object.
clink.unlink()
_____no_output_____
Apache-2.0
intro_python/python_tutorials/jupyter-notebook_intro/Widget Basics.ipynb
cgentemann/tutorials
Import
# Matplotlib
import matplotlib.pyplot as plt
# Tensorflow
import tensorflow as tf
# Numpy and Pandas
import numpy as np
import pandas as pd
# Other imports
import sys

from sklearn.preprocessing import StandardScaler
Limited tf.compat.v2.summary API due to missing TensorBoard installation Limited tf.summary API due to missing TensorBoard installation
MIT
Neural Network Error-Function.ipynb
Haytam222/Neural-Network
Be sure to use TensorFlow 2.0
assert hasattr(tf, "function") # Be sure to use tensorflow 2.0
_____no_output_____
MIT
Neural Network Error-Function.ipynb
Haytam222/Neural-Network
Load the dataset: Fashion MNIST ![](https://storage.googleapis.com/kaggle-datasets-images/2243/3791/9384af51de8baa77f6320901f53bd26b/dataset-cover.png)
# Fashion MNIST
fashion_mnist = tf.keras.datasets.fashion_mnist
(images, targets), (_, _) = fashion_mnist.load_data()

# Get only a subpart of the dataset
images = images[:10000]
targets = targets[:10000]

images = images.reshape(-1, 784)
images = images.astype(float)
scaler = StandardScaler()
images = scaler.fit_transform(images)

print(images.shape)
print(targets.shape)
(10000, 784) (10000,)
MIT
Neural Network Error-Function.ipynb
Haytam222/Neural-Network
Plot one of the images
targets_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat", "Sandal",
                 "Shirt", "Sneaker", "Bag", "Ankle boot"]

# Plot one image
plt.imshow(images[10].reshape(28, 28), cmap="binary")
plt.title(targets_names[targets[10]])
plt.show()

# Note: this prints the whole standardized image, not just its first line.
print("First line of one image", images[11])
print("Associated target", targets[11])
First line of one image [-0.01426971 -0.02645579 -0.029489   ... -0.0929556  -0.03456268]
Associated target 9
MIT
Neural Network Error-Function.ipynb
Haytam222/Neural-Network
Create the model

![](images/simple_nn.png)
# Flatten model = tf.keras.models.Sequential() #model.add(tf.keras.layers.Flatten(input_shape=[28, 28])) # Add the layers model.add(tf.keras.layers.Dense(256, activation="relu")) model.add(tf.keras.layers.Dense(128, activation="relu")) model.add(tf.keras.layers.Dense(10, activation="softmax")) model_output = model.predict(images[0:1]) print(model_output, targets[0:1])
[[0.17820482 0.05316375 0.07201441 0.1023543 0.02913541 0.17055362 0.06326886 0.24632096 0.02520118 0.05978273]] [9]
MIT
Neural Network Error-Function.ipynb
Haytam222/Neural-Network
Model Summary
model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) multiple 200960 _________________________________________________________________ dense_1 (Dense) multiple 32896 _________________________________________________________________ dense_2 (Dense) multiple 1290 ================================================================= Total params: 235,146 Trainable params: 235,146 Non-trainable params: 0 _________________________________________________________________
MIT
Neural Network Error-Function.ipynb
Haytam222/Neural-Network
Compile the model
# Compile the model model.compile( loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"] )
_____no_output_____
MIT
Neural Network Error-Function.ipynb
Haytam222/Neural-Network
Categorical cross entropy
images_test = images[:5] labels_test = targets[:5] print(images_test.shape) print(labels_test) outputs_test = model.predict(images_test) print(outputs_test.shape) print("Output", outputs_test) #print("\nLabels", labels_test) filtered_outputs_test = outputs_test[np.arange(5), labels_test] print("\nFiltered output", filtered_outputs_test) log_filtered_output = np.log(filtered_outputs_test) print("\nLog Filtered output", log_filtered_output) print("Mean", log_filtered_output.mean()) print("Mean", -log_filtered_output.mean())
(5, 784) [9 0 0 3 0] (5, 10) Output [[0.00155602 0.00106303 0.00698406 0.00284724 0.01145798 0.03515041 0.01286932 0.0088392 0.04853413 0.8706986 ] [0.7814132 0.00516847 0.01360413 0.0021324 0.01019276 0.00489966 0.15709291 0.00928725 0.01079097 0.00541825] [0.07476368 0.22221217 0.0560216 0.36471143 0.03012619 0.0860177 0.05936769 0.04129038 0.03758162 0.02790763] [0.24922626 0.1875557 0.11506705 0.19797364 0.03391237 0.02039414 0.09415954 0.03401936 0.0236186 0.04407331] [0.12672135 0.25767553 0.00981988 0.46853614 0.06494886 0.01375138 0.0213926 0.01535748 0.01641455 0.00538218]] Filtered output [0.8706986 0.7814132 0.07476368 0.19797364 0.12672135] Log Filtered output [-0.13845943 -0.2466512 -2.5934231 -1.6196214 -2.0657647 ] Mean -1.3327839 Mean 1.3327839
MIT
Neural Network Error-Function.ipynb
Haytam222/Neural-Network
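As a sanity check (a sketch using the variables defined above), the manual computation should match Keras' built-in sparse categorical cross entropy:

scce = tf.keras.losses.SparseCategoricalCrossentropy()
print("Keras loss:", scce(labels_test, outputs_test).numpy())  # ~= -log_filtered_output.mean()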
Train the model
history = model.fit(images, targets, epochs=1) loss_curve = history.history["loss"] acc_curve = history.history["accuracy"] plt.plot(loss_curve) plt.title("Loss") plt.show() plt.plot(acc_curve) plt.title("Accuracy") plt.show()
_____no_output_____
MIT
Neural Network Error-Function.ipynb
Haytam222/Neural-Network
get historical actual data
datafarm = DataFarm()
historical_data = datafarm.run()

week_data = historical_data.sum().sort_index()  # weekly student counts since 2018

vacation_week_weight = {'201826': 0.5, '201827': 0.6, '201828': 0.7, '201829': 0.8,
                        '201830': 0.9, '201831': 0.9, '201832': 0.9,
                        '201833': 1, '201834': 1, '201835': 1,
                        '201904': 0.4,
                        '201905': 0.4,
                        '201906': -0.1,
                        '201907': 0.5,
                        '201908': 0.5,  # no full-day classes, but summer vouchers could still be used
                        '201909': 0.5   # no full-day classes, but summer vouchers could still be used
                        }
whole_day_week = pd.Series(0, index=week_data.index, dtype='float')

# Winter/summer vacation classes
for i, value in vacation_week_weight.items():
    whole_day_week[i] = value

# Weekly data
plt.figure(0)
week_data.plot(figsize=(80, 19), label='111', kind='bar')
_____no_output_____
MIT
jiuqu/student_count_predict.ipynb
LeonKennedy/jupyter_space
$y_i = c_1e^{-x_i} + c_2x_i + c_3z_i$
y = week_data.values x = np.r_[1:len(y)+1] z = whole_day_week.values # A = np.c_[np.exp(-x)[:, np.newaxis], x[:, np.newaxis], z[:, np.newaxis]] A = np.c_[x[:, np.newaxis], z[:, np.newaxis]] c, resid, rank, sigma = linalg.lstsq(A, y) xi2 = np.r_[1:len(y):10j] yi2 = c[0]*x + c[1]*z plt.figure(1, figsize=(80,30)) plt.bar(x, y) plt.plot(x,yi2, 'r') # plt.axis([0,1.1,3.0,5.5]) plt.xlabel('$x_i$') plt.title('Data fitting with linalg.lstsq') plt.show()
_____no_output_____
MIT
jiuqu/student_count_predict.ipynb
LeonKennedy/jupyter_space
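For reference, `linalg.lstsq` above solves the ordinary least-squares problem

$$\hat{c} = \arg\min_{c} \lVert Ac - y \rVert_2^2,$$

where the columns of $A$ are the week index $x_i$ and the vacation weight $z_i$ (the $e^{-x_i}$ column from the formula above is commented out in the code), so the fitted curve is $\hat{y}_i = c_1 x_i + c_2 z_i$.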
After subtracting the winter/summer vacation effect, apply exponentially weighted windows
plt.figure(2, figsize=(80, 19)) base_data = week_data - c[1]*z base_data.plot(style='b--') base_data.ewm(span=5).mean().plot(style='r')
_____no_output_____
MIT
jiuqu/student_count_predict.ipynb
LeonKennedy/jupyter_space
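For reference, `ewm(span=5)` corresponds to a smoothing factor $\alpha = 2/(\mathrm{span}+1)$; with pandas' default `adjust=True`, the smoothed value at step $t$ is

$$y_t = \frac{\sum_{i=0}^{t} (1-\alpha)^i x_{t-i}}{\sum_{i=0}^{t} (1-\alpha)^i}.$$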
Fine-grained fitting (roomtype, chapter)
historical_data.groupby(level=[1,2]).sum()
_____no_output_____
MIT
jiuqu/student_count_predict.ipynb
LeonKennedy/jupyter_space
First Let's Get the Butler and a Tract
butler = Butler(REPOS['2.2i_dr6_wfd']) skymap = butler.get("deepCoadd_skyMap") # this list is hard coded - the gen 2 butler doesn't have a method for introspection meta = {} meta["all_tracts"] = """2723 2730 2897 2904 3076 3083 3259 3266 3445 3452 3635 3642 3830 3837 4028 4035 4230 4428 4435 4636 4643 4851 4858 5069 2724 2731 2898 2905 3077 3084 3260 3267 3446 3453 3636 3643 3831 4022 4029 4224 4231 4429 4436 4637 4644 4852 4859 5070 2725 2732 2899 2906 3078 3085 3261 3268 3447 3454 3637 3825 3832 4023 4030 4225 4232 4430 4437 4638 4645 4853 4860 5071 2726 2733 2900 2907 3079 3086 3262 3441 3448 3631 3638 3826 3833 4024 4031 4226 4233 4431 4438 4639 4646 4854 5065 5072 2727 2734 2901 2908 3080 3256 3263 3442 3449 3632 3639 3827 3834 4025 4032 4227 4234 4432 4439 4640 4647 4855 5066 5073 2728 2735 2902 3074 3081 3257 3264 3443 3450 3633 3640 3828 3835 4026 4033 4228 4235 4433 4440 4641 4648 4856 5067 5074 2729 2896 2903 3075 3082 3258 3265 3444 3451 3634 3641 3829 3836 4027 4034 4229 4236 4434 4441 4642 4850 4857 5068""".split() ti = skymap[4030]
_____no_output_____
BSD-3-Clause
examples/cosmoDC2_galaxy_hexgrid_matching_example.ipynb
LSSTDESC/ssi-tools
Make a Grid of objects

We will use the `fsi_tools` package to make a hexagonal grid of object positions.
grid = make_hexgrid_for_tract(ti, rng=10) plt.figure() plt.plot(grid["x"], grid["y"], '.') ax = plt.gca() ax.set_aspect('equal') plt.xlim(0, 640) plt.ylim(0, 640)
_____no_output_____
BSD-3-Clause
examples/cosmoDC2_galaxy_hexgrid_matching_example.ipynb
LSSTDESC/ssi-tools
Neat!

Building the Source Catalog
srcs = fitsio.read( "/global/cfs/cdirs/lsst/groups/fake-source-injection/DC2/catalogs/" "cosmoDC2_v1.1.4_small_fsi_catalog.fits", ) msk = srcs["rmagVar"] <= 25 srcs = srcs[msk] rng = np.random.RandomState(seed=10) inds = rng.choice(len(srcs), size=len(grid), replace=True) tract_sources = srcs[inds].copy() tract_sources["raJ2000"] = np.deg2rad(grid["ra"]) tract_sources["decJ2000"] = np.deg2rad(grid["dec"])
_____no_output_____
BSD-3-Clause
examples/cosmoDC2_galaxy_hexgrid_matching_example.ipynb
LSSTDESC/ssi-tools
Now Cut to Just the First Patch
patch = ti[0]

# TODO: we may want to expand this box to account for light from neighboring objects
msk = patch.getOuterBBox().contains(grid["x"], grid["y"])
print("found %d objects in patch %s" % (np.sum(msk), "%d,%d" % patch.getIndex()))
found 4741 objects in patch 0,0
BSD-3-Clause
examples/cosmoDC2_galaxy_hexgrid_matching_example.ipynb
LSSTDESC/ssi-tools
Run the Stack FSI Code
fitsio.write("ssi.fits", tract_sources[msk], clobber=True) %%time !insertFakes.py \ /global/cfs/cdirs/lsst/production/DC2_ImSim/Run2.2i/desc_dm_drp/v19.0.0-v1/rerun/run2.2i-coadd-wfd-dr6-v1 \ --output test/ \ --id tract=4030 patch=0,0 \ filter=r -c fakeType=ssi.fits \ --clobber-config --no-versions
CameraMapper INFO: Loading exposure registry from /global/cfs/cdirs/lsst/shared/DC2-prod/Run2.2i/desc_dm_drp/v19.0.0/registry.sqlite3 CameraMapper INFO: Loading calib registry from /global/cfs/cdirs/lsst/shared/DC2-prod/Run2.2i/desc_dm_drp/v19.0.0/CALIB/calibRegistry.sqlite3 CameraMapper INFO: Loading calib registry from /global/cfs/cdirs/lsst/shared/DC2-prod/Run2.2i/desc_dm_drp/v19.0.0/CALIB/calibRegistry.sqlite3 CameraMapper INFO: Loading calib registry from /global/cfs/cdirs/lsst/production/DC2_ImSim/Run2.2i/desc_dm_drp/v19.0.0-v1/CALIB/calibRegistry.sqlite3 LsstCamMapper WARN: Unable to find valid calib root directory LsstCamMapper WARN: Unable to find valid calib root directory CameraMapper INFO: Loading calib registry from /global/cfs/cdirs/lsst/shared/DC2-prod/Run2.2i/desc_dm_drp/v19.0.0/CALIB/calibRegistry.sqlite3 CameraMapper INFO: Loading exposure registry from /global/cfs/cdirs/lsst/shared/DC2-prod/Run2.2i/desc_dm_drp/v19.0.0/registry.sqlite3 CameraMapper INFO: Loading calib registry from /global/cfs/cdirs/lsst/shared/DC2-prod/Run2.2i/desc_dm_drp/v19.0.0/CALIB/calibRegistry.sqlite3 LsstCamMapper WARN: Unable to find valid calib root directory CameraMapper INFO: Loading calib registry from /global/cfs/cdirs/lsst/shared/DC2-prod/Run2.2i/desc_dm_drp/v19.0.0/CALIB/calibRegistry.sqlite3 CameraMapper INFO: Loading exposure registry from /global/cfs/cdirs/lsst/shared/DC2-prod/Run2.2i/desc_dm_drp/v19.0.0/registry.sqlite3 CameraMapper INFO: Loading calib registry from /global/cfs/cdirs/lsst/shared/DC2-prod/Run2.2i/desc_dm_drp/v19.0.0/CALIB/calibRegistry.sqlite3 CameraMapper INFO: Loading exposure registry from /global/cfs/cdirs/lsst/production/DC2_ImSim/Run2.2i/desc_dm_drp/v19.0.0-v1/registry.sqlite3 CameraMapper INFO: Loading calib registry from /global/cfs/cdirs/lsst/production/DC2_ImSim/Run2.2i/desc_dm_drp/v19.0.0-v1/CALIB/calibRegistry.sqlite3 root INFO: Running: /global/homes/b/beckermr/.conda/envs/dmstack2020.40w/lsst_home/stack/miniconda/Linux64/pipe_tasks/20.0.0-28-g282f9e7e+feda6aebd8/bin/insertFakes.py /global/cfs/cdirs/lsst/production/DC2_ImSim/Run2.2i/desc_dm_drp/v19.0.0-v1/rerun/run2.2i-coadd-wfd-dr6-v1 --output test/ --id tract=4030 patch=0,0 filter=r -c fakeType=fsi.fits --clobber-config --no-versions insertFakes INFO: Adding fakes to: tract: 4030, patch: 0,0, filter: r insertFakes INFO: Adding mask plane with bitmask 131072 numexpr.utils INFO: Note: NumExpr detected 64 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8. insertFakes INFO: Removing 0 rows with HLR = 0 for either the bulge or disk insertFakes INFO: Removing 0 rows of galaxies with nBulge or nDisk outside of 0.30 <= n <= 6.20 insertFakes INFO: Making 4738 fake galaxy images /global/homes/b/beckermr/.conda/envs/dmstack2020.40w/lib/python3.7/site-packages/galsim/errors.py:420: GalSimWarning: A component to be convolved is not analytic in real space. Cannot use real space convolution. Switching to DFT method. warnings.warn(message, GalSimWarning) insertFakes INFO: Making 0 fake star images CPU times: user 17.8 s, sys: 3.4 s, total: 21.2 s Wall time: 16min 46s
BSD-3-Clause
examples/cosmoDC2_galaxy_hexgrid_matching_example.ipynb
LSSTDESC/ssi-tools
Make an Image
butler = Butler("./test/") cutoutSize = geom.ExtentI(1001, 1001) ra = np.mean(tract_sources[msk]["raJ2000"]) / np.pi * 180.0 dec = np.mean(tract_sources[msk]["decJ2000"]) / np.pi * 180.0 point = geom.SpherePoint(ra, dec, geom.degrees) skymap = butler.get("deepCoadd_skyMap") tractInfo = skymap.findTract(point) patchInfo = tractInfo.findPatch(point) xy = geom.PointI(tractInfo.getWcs().skyToPixel(point)) bbox = geom.BoxI(xy - cutoutSize//2, cutoutSize) coaddId = { 'tract': tractInfo.getId(), 'patch': "%d,%d" % patchInfo.getIndex(), 'filter': 'r' } print(coaddId) image = butler.get("deepCoadd_sub", bbox=bbox, immediate=True, dataId=coaddId) fake_image = butler.get("fakes_deepCoadd_sub", bbox=bbox, immediate=True, dataId=coaddId) fig, axes = plt.subplots(ncols=3, figsize=(15, 5)) axes[0].imshow(np.arcsinh(image.maskedImage.image.array/np.sqrt(image.variance.array))) axes[0].set_title("image") axes[1].imshow(np.arcsinh(fake_image.maskedImage.image.array/np.sqrt(image.variance.array))) axes[1].set_title("FSI image") axes[2].imshow(np.arcsinh((fake_image.maskedImage.image.array - image.maskedImage.image.array)/1e-3)) axes[2].set_title("diff. image") plt.show()
{'tract': 4030, 'patch': '0,0', 'filter': 'r'}
BSD-3-Clause
examples/cosmoDC2_galaxy_hexgrid_matching_example.ipynb
LSSTDESC/ssi-tools