Dataset fields: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (15 classes).
Of course, you can also assign several values to several variables at once, for example:
x,y,z = "So long","and thanks for all","the fish" print x,y,z
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
Syntactic sugar for assignment (swapping two values in one statement):
x,y = "So long",42 x,y = y,x print x print y
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
Python also has augmented (in-place) assignment:
a = 42
a += 1
print a
a -= 1
a *= 2
print a
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
There are many more operators with an augmented-assignment form, including: *= (multiply in place), /= (divide in place), %= (modulo in place), **= (power in place), <<= (left-shift in place), >>= (right-shift in place), &= (bitwise AND in place), ^= (bitwise XOR in place), and so on.
Note: Python does not support x++ or --x style operations.
2.3 Python numeric types
int: 42 126 -680 -0x92
bool: True False
float: 3.1415926 -90.00 6.022e23
complex: 6.23+1.5j -1.23-875j 0+1j
type: reports a value's type
isinstance: tests whether a value belongs to the given type(s) (recommended)
a, b, c, d = 42, True, 3.1415926, 6.23+1.5j
print type(a), type(b), type(c), type(d)
print isinstance(a, int), isinstance(b, (float, int)),\
      isinstance(c, float), isinstance(d, (int, bool))
# the real and imaginary parts of a complex number can be read separately
print d.real
print d.imag
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
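Picking up the augmented-assignment operators listed above, here is a small Python 2-style sketch; the starting values are arbitrary and chosen only for illustration:
n = 42
n %= 5      # n = n % 5  -> 2
n **= 3     # n = n ** 3 -> 8
n <<= 1     # n = n << 1 -> 16
n &= 24     # n = n & 24 -> 16
n ^= 5      # n = n ^ 5  -> 21
print n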
An industrial-strength calculator. Basic operators: + - * / // % **. Comparison operators: < <= > >= == !=
d = 6.23+1.5j
e = 0+1j
print d+e, d-e, d*e
print 7*3, 7**2        # x**y returns x raised to the power y
print 8%3, float(10)/3, 10.0/3, 10//3
print 1==2, 1 != 2     # == tests for equality, != for inequality
print 7%2              # the remainder of the division
print 1 < 42 < 100
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
Time to show true division:
print 7/4
from __future__ import division
print 7/4
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
Plain floor division:
print 7//4
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
Booleans:
print not True
print not False
print 40 < 42 <= 100
print not 40 > 30
print 40 > 30 and 40 < 42 <= 100
print 40 < 30 or 40 < 42 <= 100
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
2.4 The Python string type. String length (len) and slicing:
c = "Hello world" print len(c) print c[0],c[1:3] print c[-1],c[:] print c[::2],c[::-1]
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
More complex slicing: [start:end:step]
c = "Hello world" print c[::-1] #翻转字符串 print c[:] #原样复制字符串 print c[::2] #隔一个取一个 print c[:8] #前八个字母 print c[:8:2] #前八个字母,每两个取一个
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
Copying:
c = "Hello world" d = c[:] #复制字符串,赋值给d del c #删除原字符串 print d #d 字符串依然可用 e = d[0:4] print e #新构造一个截取d字符串部分所组成的串 f = e print id(f) print id(e) del e print id(f),f #仍然可用
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
The plus sign (+) concatenates strings, and the asterisk (*) repeats a string.
teststr, stringback = "Clojure is", "cool"
print '-'*20
print teststr + stringback
print teststr*3
print '-'*20
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
Prettier printing:
pystr, pystring = "Clojure is", "cool"
print '-'*20
print pystr, pystring
print '-'*20
print pystr + '\t' + pystring
print '-'*20
pystr = "Clojure"
pystring = "cool"
yastr = "LISP"
yastring = "wonderful "
print('Python\'s Format I : {0} is {1}'.format(pystr, pystring))
print 'Python\'s Format II: {language} is {description}'.\
    format(language='Scala', description='awesome')    # use \ to continue onto the next line
print 'C Style Print: %s is %s' % (yastr, yastring)
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
A fancier string example:
for i in range(0,5)+range(2,8)+range(3,12)+[2,2]:
    print ' '*(40-2*i-i//2) + '*'*(4*i+1+i)
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
A trailing backslash \ continues the line:
"A:What's your favorite language?\ B:C++."
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
\t: horizontal tab:
print "A:What’s your favorite language?\nB:C++" print "A:What’s your favorite language?\tB:C++"
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
Note the other escape sequences (backslash + character):
print 'What\'s your favorite language?'
print "What's your favorite language?"
print "What\"s your favorite language?\\"
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
Other operations: join, split, strip, upper, lower
s = "a\tb\tc\td" print s l = s.split('\t') print l snew = ','.join(l) print snew line = '\t Blabla \t \n' print line.strip() print line.lstrip() print line.rstrip() salpha = 'Abcdefg' print salpha.upper() print salpha.lower() #isupper islower isdigit isalpha
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
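The is* predicates mentioned in the comment above work like this (a quick sketch with made-up strings):
print 'ABC'.isupper(), 'abc'.islower()
print '123'.isdigit(), 'abc123'.isalpha()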
More operations: see dir('s'), the codecs module, and re (regexp, regular expressions).
2.5 A first look at Python mutable types (Mutables)
Strings (string): immutable
Byte arrays (bytearray): mutable
s = "string" print type(s) s[3] = "o" s = "String" sba = bytearray(s) sba[3] = "o" print sba
Series_0_Python_Tutorials/S0EP1_Python_Basics.ipynb
Wx1ng/Python4DataScience.CH
cc0-1.0
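As a quick pointer to the re module mentioned above, a minimal sketch (the pattern and the test string are made up purely for illustration):
import re
print re.findall(r'\d+', 'room 101, floor 7')    # ['101', '7']
print re.sub(r'\s+', '-', 'So long and thanks')  # 'So-long-and-thanks'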
The FASTA file shown above has just one sequence in it. As we saw in the first example above, it's also possible for one FASTA file to contain multiple sequences. These are sometimes called multi-FASTA files. When you write code to interpret FASTA files, it's a good idea to always allow for the possibility that the FASTA file might contain multiple sequences. FASTA files are often stored with the .fa file name extension, but this is not a rule. .fasta is another popular extension. You may also see .fas, .fna, .mfa (for multi-FASTA), and others.
Parsing FASTA
Here is a simple function for parsing a FASTA file into a Python dictionary. The dictionary maps short names to corresponding nucleotide strings (with whitespace removed).
def parse_fasta(fh):
    fa = {}
    current_short_name = None
    # Part 1: compile list of lines per sequence
    for ln in fh:
        if ln[0] == '>':
            # new name line; remember current sequence's short name
            long_name = ln[1:].rstrip()
            current_short_name = long_name.split()[0]
            fa[current_short_name] = []
        else:
            # append nucleotides to current sequence
            fa[current_short_name].append(ln.rstrip())
    # Part 2: join lists into strings
    for short_name, nuc_list in fa.items():
        # join this sequence's lines into one long string
        fa[short_name] = ''.join(nuc_list)
    return fa
notebooks/FASTA.ipynb
BenLangmead/comp-genomics-class
gpl-2.0
The first part accumulates a list of strings (one per line) for each sequence. The second part joins those lines together so that we end up with one long string per sequence. Why divide it up this way? Mainly to avoid the poor performance of repeatedly concatenating (immutable) Python strings. I'll test it by running it on the simple multi-FASTA file we saw before:
from io import StringIO
fasta_example = StringIO(
'''>sequence1_short_name with optional additional info after whitespace
ACATCACCCCATAAACAAATAGGTTTGGTCCTAGCCTTTCTATTAGCTCTTAGTAAGATTACACATGCAA
GCATCCCCGTTCCAGTGAGTTCACCCTCTAAATCACCACGATCAAAAGGAACAAGCATCAAGCACGCAGC
AATGCAGCTCAAAACGCTTAGCCTAGCCACACCCCCACGGGAAACAGCAGTGAT
>sequence2_short_name with optional additional info after whitespace
GCCCCAAACCCACTCCACCTTACTACCAGACAACCTTAGCCAAACCATTTACCCAAATAAAGTATAGGCG
ATAGAAATTGAAACCTGGCGCAATAGATATAGTACCGCAAGGGAAAGATGAAAAATTATAACCAAGCATA
ATATAG''')
parsed_fa = parse_fasta(fasta_example)
parsed_fa
notebooks/FASTA.ipynb
BenLangmead/comp-genomics-class
gpl-2.0
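To make the performance point above concrete, here is a rough sketch comparing repeated string concatenation with the list-then-join approach. Absolute timings will vary, and CPython can sometimes optimize in-place concatenation, but join scales more predictably:
import timeit
concat_stmt = "s = ''\nfor _ in range(20000): s = s + 'ACGT'"
join_stmt = "parts = []\nfor _ in range(20000): parts.append('ACGT')\ns = ''.join(parts)"
print(timeit.timeit(concat_stmt, number=20))  # grow a string one chunk at a time
print(timeit.timeit(join_stmt, number=20))    # accumulate chunks, join once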
Note that only the short names survive. This is usually fine, but it's not hard to modify the function so that information relating short names to long names is also retained. Indexed FASTA Say you have one or more big FASTA files (e.g. the entire human reference genome) and you'd like to access those files "randomly," peeking at substrings here and there without any regular access pattern. Maybe you're mimicking a sequencing machine, reading snippets of DNA here and there. You could start by using the parse_fasta function defined above to parse the FASTA files. Then, to access a substring, do as follows:
parsed_fa['sequence2_short_name'][100:130]
notebooks/FASTA.ipynb
BenLangmead/comp-genomics-class
gpl-2.0
Accessing a substring in this way is very fast and simple. The downside is that you've stored all of the sequences in memory. If the FASTA files are really big, this takes lots of valuable memory. This may or may not be a good trade. An alternative is to load only the portions of the FASTA files that you need, when you need them. For this to be practical, we have to have a way of "jumping" to the specific part of the specific FASTA file that you're interested in. Fortunately, there is a standard way of indexing a FASTA file, popularized by the faidx tool in SAMtools. When you have such an index, it's easy to calculate exactly where to jump to when you want to extract a specific substring. Here is some Python to create such an index:
def index_fasta(fh):
    index = []
    current_short_name = None
    current_byte_offset, running_seq_length, running_byte_offset = 0, 0, 0
    line_length_including_ws, line_length_excluding_ws = 0, 0
    for ln in fh:
        ln_stripped = ln.rstrip()
        running_byte_offset += len(ln)
        if ln[0] == '>':
            if current_short_name is not None:
                index.append((current_short_name, running_seq_length, current_byte_offset,
                              line_length_excluding_ws, line_length_including_ws))
            long_name = ln_stripped[1:]
            current_short_name = long_name.split()[0]
            current_byte_offset = running_byte_offset
            running_seq_length = 0
        else:
            line_length_including_ws = max(line_length_including_ws, len(ln))
            line_length_excluding_ws = max(line_length_excluding_ws, len(ln_stripped))
            running_seq_length += len(ln_stripped)
    if current_short_name is not None:
        index.append((current_short_name, running_seq_length, current_byte_offset,
                      line_length_excluding_ws, line_length_including_ws))
    return index
notebooks/FASTA.ipynb
BenLangmead/comp-genomics-class
gpl-2.0
Here we use it to index a small multi-FASTA file. We print out the index at the end.
fasta_example = StringIO(
'''>sequence1_short_name with optional additional info after whitespace
ACATCACCCCATAAACAAATAGGTTTGGTCCTAGCCTTTCTATTAGCTCTTAGTAAGATTACACATGCAA
GCATCCCCGTTCCAGTGAGTTCACCCTCTAAATCACCACGATCAAAAGGAACAAGCATCAAGCACGCAGC
AATGCAGCTCAAAACGCTTAGCCTAGCCACACCCCCACGGGAAACAGCAGTGAT
>sequence2_short_name with optional additional info after whitespace
GCCCCAAACCCACTCCACCTTACTACCAGACAACCTTAGCCAAACCATTTACCCAAATAAAGTATAGGCG
ATAGAAATTGAAACCTGGCGCAATAGATATAGTACCGCAAGGGAAAGATGAAAAATTATAACCAAGCATA
ATATAG''')
idx = index_fasta(fasta_example)
idx
notebooks/FASTA.ipynb
BenLangmead/comp-genomics-class
gpl-2.0
What do the fields in those two records mean? Take the first record: ('sequence1_short_name', 194, 69, 70, 71). The fields from left to right are (1) the short name, (2) the length (in nucleotides), (3) the byte offset in the FASTA file of the first nucleotide of the sequence, (4) the maximum number of nucleotides per line, and (5) the maximum number of bytes per line, including whitespace. It's not hard to convince yourself that, if you know all these things, it's not hard to figure out the byte offset of any position in any of the sequences. (This is what the get member of the FastaIndexed class defined below does.) A typical way to build a FASTA index like this is to use SAMtools, specifically the samtools faidx command. This and all the other samtools commands are documented in its manual. When you use a tool like this to index a FASTA file, a new file containing the index is written with an additional .fai extension. E.g. if the FASTA file is named hg19.fa, then running samtools faidx hg19.fa will create a new file hg19.fa.fai containing the index. The following Python class shows how you might use the FASTA file together with its index to extract arbitrary substrings without loading all of the sequences into memory:
import re

class FastaOOB(Exception):
    """ Out-of-bounds exception for FASTA sequences """
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return repr(self.value)

class FastaIndexed(object):
    """ Encapsulates a set of indexed FASTA files.  Does not load the FASTA
        files into memory but still allows the user to extract arbitrary
        substrings, with the help of the index. """

    __removeWs = re.compile(r'\s+')

    def __init__(self, fafns):
        self.fafhs = {}
        self.faidxs = {}
        self.chr2fh = {}
        self.offset = {}
        self.lens = {}
        self.charsPerLine = {}
        self.bytesPerLine = {}

        for fafn in fafns:
            # Open FASTA file
            self.fafhs[fafn] = fh = open(fafn, 'r')
            # Parse corresponding .fai file
            with open(fafn + '.fai') as idxfh:
                for ln in idxfh:
                    toks = ln.rstrip().split()
                    if len(toks) == 0:
                        continue
                    assert len(toks) == 5
                    # Parse and save the index line
                    chr, ln, offset, charsPerLine, bytesPerLine = toks
                    self.chr2fh[chr] = fh
                    self.offset[chr] = int(offset)  # 0-based
                    self.lens[chr] = int(ln)
                    self.charsPerLine[chr] = int(charsPerLine)
                    self.bytesPerLine[chr] = int(bytesPerLine)

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        # Close all the open FASTA files
        for fafh in self.fafhs.values():
            fafh.close()

    def has_name(self, refid):
        return refid in self.offset

    def name_iter(self):
        return iter(self.offset.keys())

    def length_of_ref(self, refid):
        return self.lens[refid]

    def get(self, refid, start, ln):
        ''' Return the specified substring of the reference. '''
        assert refid in self.offset
        if start + ln > self.lens[refid]:
            raise FastaOOB('"%s" has length %d; tried to get [%d, %d)' % (refid, self.lens[refid], start, start + ln))
        fh, offset, charsPerLine, bytesPerLine = \
            self.chr2fh[refid], self.offset[refid], \
            self.charsPerLine[refid], self.bytesPerLine[refid]
        byteOff = offset
        byteOff += (start // charsPerLine) * bytesPerLine
        into = start % charsPerLine
        byteOff += into
        fh.seek(byteOff)
        left = charsPerLine - into
        # Count the number of line breaks interrupting the rest of the
        # string we're trying to read
        if ln < left:
            return fh.read(ln)
        else:
            nbreaks = 1 + (ln - left) // charsPerLine
            res = fh.read(ln + nbreaks * (bytesPerLine - charsPerLine))
            res = re.sub(self.__removeWs, '', res)
            return res
notebooks/FASTA.ipynb
BenLangmead/comp-genomics-class
gpl-2.0
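The core arithmetic inside the class's get method can also be written on its own. Given a sequence's index record, the byte offset of the 0-based nucleotide position pos is the sequence's starting byte offset, plus one whole line's worth of bytes for every full line skipped, plus the position within the current line (a standalone sketch; the function name is just illustrative):
def fasta_byte_offset(seq_byte_offset, pos, chars_per_line, bytes_per_line):
    # whole lines skipped, plus the position within the current line
    return seq_byte_offset + (pos // chars_per_line) * bytes_per_line + pos % chars_per_line

# Using the first index record shown above: byte offset 69, 70 nucleotides and 71 bytes per line
print(fasta_byte_offset(69, 100, 70, 71))  # byte offset of nucleotide 100 of sequence1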
Here's an example of how to use the class defined above.
# first we'll write a new FASTA file
with open('tmp.fa', 'w') as fh:
    fh.write('''>sequence1_short_name with optional additional info after whitespace
ACATCACCCCATAAACAAATAGGTTTGGTCCTAGCCTTTCTATTAGCTCTTAGTAAGATTACACATGCAA
GCATCCCCGTTCCAGTGAGTTCACCCTCTAAATCACCACGATCAAAAGGAACAAGCATCAAGCACGCAGC
AATGCAGCTCAAAACGCTTAGCCTAGCCACACCCCCACGGGAAACAGCAGTGAT
>sequence2_short_name with optional additional info after whitespace
GCCCCAAACCCACTCCACCTTACTACCAGACAACCTTAGCCAAACCATTTACCCAAATAAAGTATAGGCG
ATAGAAATTGAAACCTGGCGCAATAGATATAGTACCGCAAGGGAAAGATGAAAAATTATAACCAAGCATA
ATATAG''')
with open('tmp.fa') as fh:
    idx = index_fasta(fh)
with open('tmp.fa.fai', 'w') as fh:
    fh.write('\n'.join(['\t'.join(map(str, x)) for x in idx]))
with FastaIndexed(['tmp.fa']) as fa_idx:
    print(fa_idx.get('sequence2_short_name', 100, 30))
notebooks/FASTA.ipynb
BenLangmead/comp-genomics-class
gpl-2.0
Questions: (Note: to answer the following, open Google Earth and enter Betasso Preserve in the search bar. Zoom out a bit to view the area around Betasso) (1) Use a screen shot to place a copy of this image in your lab document. Label Boulder Creek Canyon and draw an arrow to show its flow direction. (2) Indicate and label the confluence of Fourmile Creek and Boulder Canyon. (3) What is the mean altitude? What is the maximum altitude? (Hint: see numpy functions mean and amax) Make a slope map Use the numpy gradient function to make an image of absolute maximum slope angle at each cell:
def slope_gradient(z):
    """ Calculate absolute slope gradient elevation array. """
    x, y = np.gradient(z)
    #slope = (np.pi/2. - np.arctan(np.sqrt(x*x + y*y)))
    slope = np.sqrt(x*x + y*y)
    return slope

sb = slope_gradient(zb)
dem_processing_with_gdal_python.ipynb
cmshobe/dem_analysis_with_gdal
mit
Let's see what it looks like:
plt.imshow(sb, vmin=0.0, vmax=1.0, cmap='pink')
print np.median(sb)
dem_processing_with_gdal_python.ipynb
cmshobe/dem_analysis_with_gdal
mit
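For question (3) above, the hinted numpy functions can be applied directly to the elevation array (this assumes, as in the slope cell above, that zb holds the Betasso DEM; the results are in the DEM's elevation units):
print np.mean(zb), np.amax(zb)  # mean and maximum altitude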
We can make a histogram (frequency diagram) of aspect. Here 0 degrees is east-facing, 90 is north-facing, 180 is west-facing, and 270 is south-facing.
abdeg = (180./np.pi)*ab  # convert to degrees
n, bins, patches = plt.hist(abdeg.flatten(), 50, normed=1, facecolor='green', alpha=0.75)
dem_processing_with_gdal_python.ipynb
cmshobe/dem_analysis_with_gdal
mit
Using NLTK to extract unigrams and bigrams. Ref: Chen Sun, Chuang Gan and Ram Nevatia, Automatic Concept Discovery from Parallel Text and Visual Corpora. ICCV 2015. <img src="files/iccv_paper_concepts.png">
from nltk.util import ngrams

sentence = 'A black-dog and a spotted dog are fighting.'
n = 2
bigrams = ngrams(sentence.split(), n)
for grams in bigrams:
    print grams
text_features.ipynb
surenkum/eecs_542
gpl-3.0
Some of the bigrams are obviously not relevant. So we tokenize and exclude stop words to get some relevant classes.
from nltk.corpus import stopwords
from nltk.tokenize import wordpunct_tokenize

stop_words = set(stopwords.words('english'))
stop_words.update(['.', ',', '"', "'", '?', '!', ':', ';', '(', ')', '[', ']', '{', '}', '-'])  # remove it if you need punctuation

list_of_words = [i.lower() for i in wordpunct_tokenize(sentence) if i.lower() not in stop_words]
bigrams = ngrams(list_of_words, 2)
for grams in bigrams:
    print grams
text_features.ipynb
surenkum/eecs_542
gpl-3.0
Using Scikit-Learn to explore some text datasets 20 Newsgroup dataset The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. This dataset is often used for text classification and text clustering. Some of the newsgroups are very closely related to each other (e.g. comp.sys.ibm.pc.hardware / comp.sys.mac.hardware), while others are highly unrelated (e.g misc.forsale / soc.religion.christian). From: http://qwone.com/~jason/20Newsgroups/
%run fetch_data.py twenty_newsgroups
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import TfidfVectorizer  # TF-IDF feature extraction
from sklearn.feature_extraction.text import CountVectorizer  # Count and vectorize text feature

# Load the text data
categories = [
    'alt.atheism',
    'talk.religion.misc',
    'comp.graphics',
    'sci.space',
]
twenty_train_small = load_files('./datasets/20news-bydate-train/',
                                categories=categories, encoding='latin-1')
twenty_test_small = load_files('./datasets/20news-bydate-test/',
                               categories=categories, encoding='latin-1')

# Let's display some of the data
def display_sample(i, dataset):
    target_id = dataset.target[i]
    print("Class id: %d" % target_id)
    print("Class name: " + dataset.target_names[target_id])
    print("Text content:\n")
    print(dataset.data[i])

display_sample(0, twenty_train_small)
text_features.ipynb
surenkum/eecs_542
gpl-3.0
Extracting features
Let's extract word counts to convert each text into a count vector.
count_vect = CountVectorizer(min_df=2)
X_train_counts = count_vect.fit_transform(twenty_train_small.data)
print X_train_counts.shape
text_features.ipynb
surenkum/eecs_542
gpl-3.0
Let's extract TF-IDF features from the text data. The min_df option sets a lower bound on document frequency, ignoring terms that appear in too few documents.
# Extract features
# Turn the text documents into vectors of word frequencies with tf-idf weighting
vectorizer = TfidfVectorizer(min_df=2)
X_train = vectorizer.fit_transform(twenty_train_small.data)
y_train = twenty_train_small.target
print type(X_train)
print X_train.shape
text_features.ipynb
surenkum/eecs_542
gpl-3.0
As observed, X_train is a scipy sparse matrix consisting of 2034 rows (number of text files) and 17566 different features (unique words)
print type(vectorizer.vocabulary_)          # Type of vocabulary
print len(vectorizer.vocabulary_)           # Length of vocabulary
print vectorizer.get_feature_names()[:10]   # Print first 10 elements of dictionary
print vectorizer.get_feature_names()[-10:]  # Print last 10 elements of dictionary
text_features.ipynb
surenkum/eecs_542
gpl-3.0
Visualizing Feature Space
Obviously, it's hard to make any sense of such a high-dimensional feature space. A good technique for visualizing such data is to project it to lower dimensions using PCA and then visualize the low-dimensional space.
from sklearn.decomposition import TruncatedSVD

X_train_pca = TruncatedSVD(n_components=2).fit_transform(X_train)

from itertools import cycle

colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k']
for i, c in zip(np.unique(y_train), cycle(colors)):
    plt.scatter(X_train_pca[y_train == i, 0], X_train_pca[y_train == i, 1],
                c=c, label=twenty_train_small.target_names[i], alpha=0.8)
_ = plt.legend(loc='best')
text_features.ipynb
surenkum/eecs_542
gpl-3.0
Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.) With the L2 penalty specified above, fit the model and print out the learned weights. Hint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.
poly1_data = polynomial_sframe(sales['sqft_living'], 1)
features = poly1_data.column_names()
poly1_data['price'] = sales['price']  # add price to the data since it's the target
model1 = graphlab.linear_regression.create(poly1_data, target='price', features=features,
                                           l2_penalty=l2_small_penalty, validation_set=None)
# let's take a look at the weights before we plot
model1.get("coefficients")
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model. Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
def print_coefficients(data_set, l2_penalty):
    ps = polynomial_sframe(data_set['sqft_living'], 15)
    my_features = ps.column_names()
    ps['price'] = data_set['price']
    model = graphlab.linear_regression.create(ps, target='price', features=my_features,
                                              validation_set=None, verbose=False,
                                              l2_penalty=l2_penalty)
    model.get("coefficients").print_rows(num_rows=16)

for i in [set_1, set_2, set_3, set_4]:
    print_coefficients(i, l2_small_penalty)
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
The four curves should differ from one another a lot, as should the coefficients you learned. QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
smallest = -759.251854206
largest = 1247.59034572
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
Ridge regression comes to the rescue. Generally, whenever we see weights change so much in response to a change in the data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (The weights of model15 looked quite small, but they are not that small because the 'sqft_living' input is on the order of thousands.) With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
l2_penalty = 1e5
for i in [set_1, set_2, set_3, set_4]:
    print_coefficients(i, l2_penalty)
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
These curves should vary a lot less, now that you applied a high degree of regularization. QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
smallest = 1.91040938244
largest = 2.58738875673
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1. With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
n = len(train_valid_shuffled)
print n - 7757
k = 10  # 10-fold cross-validation
for i in xrange(k):
    start = (n*i)/k
    end = (n*(i+1))/k-1
    print i, (start, end)
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above. Extract the fourth segment (segment 3) and assign it to a variable called validation4.
start, end = (n * 3) / k, (n * 4) / k - 1       # segment 3 boundaries (end is inclusive)
validation4 = train_valid_shuffled[start:end + 1]  # slicing is end-exclusive, so use end+1
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
p1 = train_valid_shuffled[0:start]       # rows before segment 3
p2 = train_valid_shuffled[end + 1:n]     # rows after segment 3
train4 = p1.append(p2)
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]:
* Compute starting and ending indices of segment i and call 'start' and 'end'
* Form validation set by taking a slice (start:end+1) from the data.
* Form training set by appending slice (end+1:n) to the end of slice (0:start).
* Train a linear model using training set just formed, with a given l2_penalty
* Compute validation error using validation set just formed
def k_fold_cross_validation(k, l2_penalty, data, features_list, output_name='price'):
    n = len(data)
    rss_k = list()
    for i in xrange(k):
        start = (n*i)/k
        end = (n*(i+1))/k-1
        validation_set = data[start:end + 1]
        training_set = data[end + 1:n].append(data[0:start])
        model = graphlab.linear_regression.create(training_set, target=output_name,
                                                  features=features_list, validation_set=None,
                                                  verbose=False, l2_penalty=l2_penalty)
        predictions = model.predict(validation_set)
        residuals = validation_set[output_name] - predictions
        RSS = sum(residuals**2)
        rss_k.append(RSS)
    return sum(rss_k)/len(rss_k)
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:
* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input
* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)
* Run 10-fold cross-validation with l2_penalty
* Report which L2 penalty produced the lowest average validation error.
Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
import numpy as np

ps = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
my_features = ps.column_names()
ps['price'] = train_valid_shuffled['price']

k = 10
result = dict()
for l2_penalty in np.logspace(1, 7, num=13):
    result[l2_penalty] = k_fold_cross_validation(k, l2_penalty, ps, my_features)
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?
best_penalty = min(result, key=result.get)
print "the best value for the L2 penalty according to 10-fold validation:", best_penalty
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
# Plot the l2_penalty values on the x axis (log scale) and the cross-validation error on the y axis.
penalties = sorted(result.keys())
plt.plot(penalties, [result[p] for p in penalties], 'k.-')
plt.xscale('log')
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset. QUIZ QUESTION: Using the best L2 penalty found above, train a model using all training data. What is the RSS on the TEST data of the model you learn with this L2 penalty?
ps = polynomial_sframe(train_valid['sqft_living'], 15)
features = ps.column_names()
ps['price'] = train_valid['price']  # add price to the data since it's the target
best_model = graphlab.linear_regression.create(ps, target='price', features=features,
                                               l2_penalty=best_penalty, validation_set=None)
predictions = best_model.predict(test)
residuals = test['price'] - predictions
print "RSS:", sum(residuals**2)
course-2-regression/notebooks/week-4-ridge-regression-assignment-1-blank.ipynb
kgrodzicki/machine-learning-specialization
mit
Working with Text Data <img src="figures/bag_of_words.svg" width=100%>
import pandas as pd
import os

data = pd.read_csv(os.path.join("data", "train.csv"))
len(data)
data

y_train = np.array(data.Insult)
y_train

text_train = data.Comment.tolist()
text_train[6]

data_test = pd.read_csv(os.path.join("data", "test_with_solutions.csv"))
text_test, y_test = data_test.Comment.tolist(), np.array(data_test.Insult)

from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
cv.fit(text_train)
len(cv.vocabulary_)
print(cv.get_feature_names()[:50])
print(cv.get_feature_names()[-50:])

X_train = cv.transform(text_train)
X_train
text_train[6]
X_train[6, :].nonzero()[1]
X_test = cv.transform(text_test)

from sklearn.svm import LinearSVC
svm = LinearSVC()
svm.fit(X_train, y_train)
svm.score(X_train, y_train)
svm.score(X_test, y_test)

def visualize_coefficients(classifier, feature_names, n_top_features=25):
    # get coefficients with large absolute values
    coef = classifier.coef_.ravel()
    positive_coefficients = np.argsort(coef)[-n_top_features:]
    negative_coefficients = np.argsort(coef)[:n_top_features]
    interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])
    # plot them
    plt.figure(figsize=(15, 5))
    colors = ["red" if c < 0 else "blue" for c in coef[interesting_coefficients]]
    plt.bar(np.arange(50), coef[interesting_coefficients], color=colors)
    feature_names = np.array(feature_names)
    plt.xticks(np.arange(1, 51), feature_names[interesting_coefficients], rotation=60, ha="right");

visualize_coefficients(svm, cv.get_feature_names())
10 - Working With Text Data.ipynb
amueller/sklearn_workshop
bsd-2-clause
Exercises
* Create a pipeline using the count vectorizer and SVM (see 07). Train and score using the pipeline.
* Vary the ngram_range in the count vectorizer, visualize the changed coefficients.
* Grid search the C in the LinearSVC using the pipeline.
* Grid search the C in the LinearSVC together with the ngram_range (try (1,1), (1, 2), (2, 2)).
A sketch of one possible solution follows the next cell.
# %load solutions/text_pipeline.py
10 - Working With Text Data.ipynb
amueller/sklearn_workshop
bsd-2-clause
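Since the solution file isn't reproduced here, the following is one possible sketch of the exercises above: a CountVectorizer + LinearSVC pipeline that is trained, scored, and then grid-searched over C and ngram_range. The parameter values are arbitrary choices, and on older scikit-learn versions GridSearchCV lives in sklearn.grid_search instead of sklearn.model_selection:
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV

# pipeline: raw text -> bag-of-words counts -> linear SVM
pipeline = make_pipeline(CountVectorizer(), LinearSVC())
pipeline.fit(text_train, y_train)
print(pipeline.score(text_test, y_test))

# grid search C together with the n-gram range
param_grid = {"linearsvc__C": [0.01, 0.1, 1, 10],
              "countvectorizer__ngram_range": [(1, 1), (1, 2), (2, 2)]}
grid = GridSearchCV(pipeline, param_grid=param_grid, cv=3)
grid.fit(text_train, y_train)
print(grid.best_params_, grid.best_score_)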
We can also plot the spectral fluxes of energy.
ebud = [m.get_diagnostic('APEgenspec').sum(axis=0),
        m.get_diagnostic('APEflux').sum(axis=0),
        m.get_diagnostic('KEflux').sum(axis=0),
        -m.rek*m.del2*m.get_diagnostic('KEspec')[1].sum(axis=0)*m.M**2]
ebud.append(-np.vstack(ebud).sum(axis=0))
ebud_labels = ['APE gen', 'APE flux', 'KE flux', 'Diss.', 'Resid.']
[plt.semilogx(m.kk, term) for term in ebud]
plt.legend(ebud_labels, loc='upper right')
plt.xlim([m.kk.min(), m.kk.max()])
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Spectral Energy Transfers');
docs/examples/two-layer.ipynb
rabernat/pyqg
mit
For this notebook we will create a toy "boron nitride" tight-binding model:
# First, we create the geometry
BN = sisl.geom.graphene(atoms=["B", "N"])

# Create a hamiltonian with different on-site terms
H = sisl.Hamiltonian(BN)

H[0, 0] = 2
H[1, 1] = -2
H[0, 1] = -2.7
H[1, 0] = -2.7
H[0, 1, (-1, 0)] = -2.7
H[0, 1, (0, -1)] = -2.7
H[1, 0, (1, 0)] = -2.7
H[1, 0, (0, 1)] = -2.7
docs/visualization/viz_module/showcase/FatbandsPlot.ipynb
zerothi/sisl
mpl-2.0
Note that we could have obtained this hamiltonian from any other source. Then we generate a path for the band structure:
band = sisl.BandStructure(H, [[0., 0.], [2./3, 1./3], [1./2, 1./2], [1., 1.]], 301, [r'$\Gamma$', 'K', 'M', r'$\Gamma$'])
docs/visualization/viz_module/showcase/FatbandsPlot.ipynb
zerothi/sisl
mpl-2.0
And finally we just ask for the fatbands plot:
fatbands = band.plot.fatbands()
fatbands
docs/visualization/viz_module/showcase/FatbandsPlot.ipynb
zerothi/sisl
mpl-2.0
We only see the bands here, but this is a fatbands plot, and it is ready to accept your requests on what to draw! Requesting specific weights The fatbands that the plot draws are controlled by the groups setting.
print(fatbands.get_param("groups").help)
docs/visualization/viz_module/showcase/FatbandsPlot.ipynb
zerothi/sisl
mpl-2.0
This setting works exactly like the requests setting in PdosPlot, which is documented here. Therefore we won't give an extended description of it, but just quickly show that you can autogenerate the groups:
fatbands.split_groups(on="species")
docs/visualization/viz_module/showcase/FatbandsPlot.ipynb
zerothi/sisl
mpl-2.0
Or write them yourself if you want the maximum flexibility:
fatbands.update_settings(groups=[ {"species": "N", "color": "blue", "name": "Nitrogen"}, {"species": "B", "color": "red", "name": "Boron"} ])
docs/visualization/viz_module/showcase/FatbandsPlot.ipynb
zerothi/sisl
mpl-2.0
Scaling fatbands The visual appeal of fatbands depends a lot on the size of your plot, therefore there's one global scale setting that scales all fatbands at the same time:
fatbands.update_settings(scale=2)
docs/visualization/viz_module/showcase/FatbandsPlot.ipynb
zerothi/sisl
mpl-2.0
You can also use the scale_fatbands method, which additionally lets you choose if you want to rescale from the current size or just set the value of scale:
fatbands.scale_fatbands(0.5, from_current=True)
docs/visualization/viz_module/showcase/FatbandsPlot.ipynb
zerothi/sisl
mpl-2.0
Use BandsPlot settings All settings of BandsPlot work as well for FatbandsPlot. Even spin texture! We hope you enjoyed what you learned! This next cell is just to create the thumbnail for the notebook in the docs
thumbnail_plot = fatbands

if thumbnail_plot:
    thumbnail_plot.show("png")
docs/visualization/viz_module/showcase/FatbandsPlot.ipynb
zerothi/sisl
mpl-2.0
<img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%"> Problem 1 The first problem involves normalizing the features for your training and test data. Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9. Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255. Min-Max Scaling: $ X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}} $ If you're having trouble solving problem 1, you can view the solution here.
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
    # TODO: Implement Min-Max scaling for grayscale image data
    x = image_data
    xmin = 0
    xmax = 255
    a = 0.1
    b = 0.9
    return a + (((x - xmin)*(b - a))/(xmax - xmin))


### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
    normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
    [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412,
     0.121960784314, 0.125098039216, 0.128235294118, 0.13137254902, 0.9],
    decimal=3)
np.testing.assert_array_almost_equal(
    normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254, 255])),
    [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078,
     0.830980392157, 0.865490196078, 0.896862745098, 0.9])

if not is_features_normal:
    train_features = normalize_grayscale(train_features)
    test_features = normalize_grayscale(test_features)
    is_features_normal = True

print('Tests Passed!')

if not is_labels_encod:
    # Turn labels into numbers and apply One-Hot Encoding
    encoder = LabelBinarizer()
    encoder.fit(train_labels)
    train_labels = encoder.transform(train_labels)
    test_labels = encoder.transform(test_labels)

    # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
    train_labels = train_labels.astype(np.float32)
    test_labels = test_labels.astype(np.float32)
    is_labels_encod = True

print('Labels One-Hot Encoded')

assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'

# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
    train_features,
    train_labels,
    test_size=0.05,
    random_state=832289)

print('Training features and labels randomized and split.')

# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
    print('Saving data to pickle file...')
    try:
        with open('notMNIST.pickle', 'wb') as pfile:
            pickle.dump(
                {
                    'train_dataset': train_features,
                    'train_labels': train_labels,
                    'valid_dataset': valid_features,
                    'valid_labels': valid_labels,
                    'test_dataset': test_features,
                    'test_labels': test_labels,
                },
                pfile, pickle.HIGHEST_PROTOCOL)
    except Exception as e:
        print('Unable to save data to', pickle_file, ':', e)
        raise

print('Data cached in pickle file.')
intro-to-tensorflow/intro_to_tensorflow.ipynb
rahulkgup/deep-learning-foundation
mit
<img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%"> Problem 3 Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy. Parameter configurations: Configuration 1 * Epochs: 1 * Learning Rate: * 0.8 * 0.5 * 0.1 * 0.05 * 0.01 Configuration 2 * Epochs: * 1 * 2 * 3 * 4 * 5 * Learning Rate: 0.2 The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed. If you're having trouble solving problem 3, you can view the solution here.
# Change if you have memory restrictions
batch_size = 128

# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.02

### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# The accuracy measured against the validation set
validation_accuracy = 0.0

# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []

with tf.Session() as session:
    session.run(init)
    batch_count = int(math.ceil(len(train_features)/batch_size))

    for epoch_i in range(epochs):

        # Progress bar
        batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')

        # The training cycle
        for batch_i in batches_pbar:
            # Get a batch of training features and labels
            batch_start = batch_i*batch_size
            batch_features = train_features[batch_start:batch_start + batch_size]
            batch_labels = train_labels[batch_start:batch_start + batch_size]

            # Run optimizer and get loss
            _, l = session.run(
                [optimizer, loss],
                feed_dict={features: batch_features, labels: batch_labels})

            # Log every 50 batches
            if not batch_i % log_batch_step:
                # Calculate Training and Validation accuracy
                training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
                validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)

                # Log batches
                previous_batch = batches[-1] if batches else 0
                batches.append(log_batch_step + previous_batch)
                loss_batch.append(l)
                train_acc_batch.append(training_accuracy)
                valid_acc_batch.append(validation_accuracy)

        # Check accuracy against Validation data
        validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)

loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()

print('Validation accuracy at {}'.format(validation_accuracy))
intro-to-tensorflow/intro_to_tensorflow.ipynb
rahulkgup/deep-learning-foundation
mit
We need to create a rotation matrix $\boldsymbol{R}$, which we will do via computation of the eigenvectors (eigenvectors are covered in the next lecture). The details are not so important here; we just need an orthogonal matrix. Computing the eigenvectors of $\boldsymbol{A}$ and checking that the eigenvectors are orthonormal:
# Compute eigenvectors to generate a set of orthonormal vectors
evalues, evectors = np.linalg.eig(A)

# Verify that eigenvectors R[i] are orthogonal (see Lecture 8 notebook)
import itertools
pairs = itertools.combinations_with_replacement(range(np.size(evectors, 0)), 2)
for p in pairs:
    e0, e1 = p[0], p[1]
    print("Dot product of eigenvectors {}, {}: {}".format(e0, e1, evectors[:, e0].dot(evectors[:, e1])))
Lecture09.ipynb
garth-wells/IA-maths-Jupyter
mit
We have verified that the eigenvectors form an orthonormal set, and hence can be used to construct a rotation transformation matrix $\boldsymbol{R}$. For reasons that will become apparent later, we choose $\boldsymbol{R}$ to be a matrix whose rows are the eigenvectors of $\boldsymbol{A}$:
R = evectors.T
Lecture09.ipynb
garth-wells/IA-maths-Jupyter
mit
We now apply the transformation defined by $\boldsymbol{R}$ to $\boldsymbol{A}$:
Ap = (R).dot(A.dot(R.T))
print(Ap)
Lecture09.ipynb
garth-wells/IA-maths-Jupyter
mit
Note that the transformed matrix is diagonal. We will investigate this further in following lectures. We can reverse the transformation by exploiting the fact that $\boldsymbol{R}$ is an orthogonal matrix:
print((R.T).dot(Ap.dot(R)))
Lecture09.ipynb
garth-wells/IA-maths-Jupyter
mit
Introduction This is an Earth Engine <> TensorFlow demonstration notebook. Specifically, this notebook shows: Exporting training/testing data from Earth Engine in TFRecord format. Preparing the data for use in a TensorFlow model. Training and validating a simple model (Keras Sequential neural network) in TensorFlow. Making predictions on image data exported from Earth Engine in TFRecord format. Ingesting classified image data to Earth Engine in TFRecord format. Install the Earth Engine client library This only needs to be done once per notebook.
!pip install earthengine-api
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Authentication To read/write from a Google Cloud Storage bucket to which you have access, it's necessary to authenticate (as yourself). You'll also need to authenticate as yourself with Earth Engine, so that you'll have access to your scripts, assets, etc. Authenticate to Colab and Cloud Identify yourself to Google Cloud, so you have access to storage and other resources. When you run the code below, it will display a link in the output to an authentication page in your browser. Follow the link to a page that will let you grant permission to the Cloud SDK to access your resources. Copy the code from the permissions page back into this notebook and press return to complete the process. (You may need to run this again if you get a credentials error later.)
from google.colab import auth
auth.authenticate_user()
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Authenticate to Earth Engine Authenticate to Earth Engine the same way you did to the Colab notebook. Specifically, run the code to display a link to a permissions page. This gives you access to your Earth Engine account. Copy the code from the Earth Engine permissions page back into the notebook and press return to complete the process.
!earthengine authenticate
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Initialize and test the software setup Test the Earth Engine installation
# Import the Earth Engine API and initialize it.
import ee
ee.Initialize()

# Test the earthengine command by getting help on upload.
!earthengine upload image -h
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Test the TensorFlow installation The default public runtime already has the tensorflow libraries we need installed. Before any operations from the TensorFlow API are used, import TensorFlow and enable eager execution. This provides an imperative interface that can help with debugging. See the TensorFlow eager execution guide or the tf.enable_eager_execution() docs for details.
import tensorflow as tf
tf.enable_eager_execution()
print(tf.__version__)
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Test the Folium installation The default public runtime already has the Folium library we will use for visualization. Import the library, check the version, and define the URL where Folium will look for Earth Engine generated map tiles.
import folium
print(folium.__version__)

# Define the URL format used for Earth Engine generated map tiles.
EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}'
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Get Training and Testing data from Earth Engine To get data for a classification model of three classes (bare, vegetation, water), we need labels and the value of predictor variables for each labeled example. We've already generated some labels in Earth Engine. Specifically, these are visually interpreted points labeled "bare," "vegetation," or "water" for a very simple classification demo (Code Editor script). For predictor variables, we'll use Landsat 8 surface reflectance imagery, bands 2-7. Prepare Landsat 8 imagery First, make a cloud-masked median composite of Landsat 8 surface reflectance imagery from 2018. Check the composite by visualizing with folium.
# Use these bands for prediction.
bands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']

# Use Landsat 8 surface reflectance data.
l8sr = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')

# Cloud masking function.
def maskL8sr(image):
  cloudShadowBitMask = ee.Number(2).pow(3).int()
  cloudsBitMask = ee.Number(2).pow(5).int()
  qa = image.select('pixel_qa')
  mask = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And(
    qa.bitwiseAnd(cloudsBitMask).eq(0))
  return image.updateMask(mask).select(bands).divide(10000)

# The image input data is a 2018 cloud-masked median composite.
image = l8sr.filterDate('2018-01-01', '2018-12-31').map(maskL8sr).median()

# Use folium to visualize the imagery.
mapid = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3})
map = folium.Map(location=[38., -122.5])
folium.TileLayer(
    tiles=EE_TILES.format(**mapid),
    attr='Google Earth Engine',
    overlay=True,
    name='median composite',
  ).add_to(map)
map.add_child(folium.LayerControl())
map
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Add pixel values of the composite to labeled points. Some training labels have already been collected for you. Load the labeled points from an existing Earth Engine asset. Each point in this table has a property called landcover that stores the label, encoded as an integer. Here we overlay the points on imagery to get predictor variables along with labels.
# Change the following two lines to use your own training data.
labels = ee.FeatureCollection('projects/google/demo_landcover_labels')
label = 'landcover'

# Sample the image at the points and add a random column.
sample = image.sampleRegions(
  collection=labels, properties=[label], scale=30).randomColumn()

# Partition the sample approximately 70-30.
training = sample.filter(ee.Filter.lt('random', 0.7))
testing = sample.filter(ee.Filter.gte('random', 0.7))

from pprint import pprint

# Print the first couple points to verify.
pprint({'training': training.first().getInfo()})
pprint({'testing': testing.first().getInfo()})
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Export the training and testing data Now that there's training and testing data in Earth Engine and you've inspected a couple examples to ensure that the information you need is present, it's time to materialize the datasets in a place where the TensorFlow model has access to them. You can do that by exporting the training and testing datasets to tables in TFRecord format (learn more about TFRecord format) in a Cloud Storage bucket (learn more about creating Cloud Storage buckets). Note that you need to have write access to the Cloud Storage bucket where the files will be output.
# REPLACE WITH YOUR BUCKET!
outputBucket = 'ee-docs-demos'

# Make sure the bucket exists.
print('Found Cloud Storage bucket.' if tf.gfile.Exists('gs://' + outputBucket)
      else 'Output Cloud Storage bucket does not exist.')
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Once you've verified the existence of the intended output bucket, run the exports.
# Names for output files.
trainFilePrefix = 'Training_demo_'
testFilePrefix = 'Testing_demo_'

# This is list of all the properties we want to export.
featureNames = list(bands)
featureNames.append(label)

# Create the tasks.
trainingTask = ee.batch.Export.table.toCloudStorage(
  collection=training,
  description='Training Export',
  fileNamePrefix=trainFilePrefix,
  bucket=outputBucket,
  fileFormat='TFRecord',
  selectors=featureNames)

testingTask = ee.batch.Export.table.toCloudStorage(
  collection=testing,
  description='Testing Export',
  fileNamePrefix=testFilePrefix,
  bucket=outputBucket,
  fileFormat='TFRecord',
  selectors=featureNames)

# Start the tasks.
trainingTask.start()
testingTask.start()
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Monitor task progress You can see all your Earth Engine tasks by listing them. It's also useful to repeatedly poll a task so you know when it's done. Here we can do that because this is a relatively quick export. Be careful when doing this with large exports because it will block the notebook from running other cells until this one completes.
# Print all tasks.
print(ee.batch.Task.list())

# Poll the training task until it's done.
import time
while trainingTask.active():
  print('Polling for task (id: {}).'.format(trainingTask.id))
  time.sleep(5)
print('Done with training export.')
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Check existence of the exported files If you've seen the status of the export tasks change to COMPLETED, then check for the existence of the files in the output Cloud Storage bucket.
fileNameSuffix = 'ee_export.tfrecord.gz'
trainFilePath = 'gs://' + outputBucket + '/' + trainFilePrefix + fileNameSuffix
testFilePath = 'gs://' + outputBucket + '/' + testFilePrefix + fileNameSuffix

print('Found training file.' if tf.gfile.Exists(trainFilePath)
      else 'No training file found.')
print('Found testing file.' if tf.gfile.Exists(testFilePath)
      else 'No testing file found.')
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Export the imagery You can also export imagery using TFRecord format. Specifically, export whatever imagery you want to be classified by the trained model into the output Cloud Storage bucket.
imageFilePrefix = 'Image_pixel_demo_'

# Specify patch and file dimensions.
imageExportFormatOptions = {
  'patchDimensions': [256, 256],
  'maxFileSize': 104857600,
  'compressed': True
}

# Export imagery in this region.
exportRegion = ee.Geometry.Rectangle([-122.7, 37.3, -121.8, 38.00])

# Setup the task.
imageTask = ee.batch.Export.image.toCloudStorage(
  image=image,
  description='Image Export',
  fileNamePrefix=imageFilePrefix,
  bucket=outputBucket,
  scale=30,
  fileFormat='TFRecord',
  region=exportRegion.toGeoJSON()['coordinates'],
  formatOptions=imageExportFormatOptions,
)

# Start the task.
imageTask.start()
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Monitor task progress Before making predictions, we need the image export to finish, so block until it does. This might take a few minutes...
while imageTask.active():
  print('Polling for task (id: {}).'.format(imageTask.id))
  time.sleep(5)
print('Done with image export.')
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Data preparation and pre-processing Read data from the TFRecord file into a tf.data.Dataset. Pre-process the dataset to get it into a suitable format for input to the model. Read into a tf.data.Dataset Here we are going to read a file in Cloud Storage into a tf.data.Dataset. (these TensorFlow docs explain more about reading data into a Dataset). Check that you can read examples from the file. The purpose here is to ensure that we can read from the file without an error. The actual content is not necessarily human readable.
# Create a dataset from the TFRecord file in Cloud Storage.
trainDataset = tf.data.TFRecordDataset(trainFilePath, compression_type='GZIP')

# Print the first record to check.
print(iter(trainDataset).next())
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Define the structure of your data For parsing the exported TFRecord files, featuresDict is a mapping between feature names (recall that featureNames contains the band and label names) and float32 tf.io.FixedLenFeature objects. This mapping is necessary for telling TensorFlow how to read data in a TFRecord file into tensors. Specifically, all numeric data exported from Earth Engine is exported as float32. (Note: features in the TensorFlow context (i.e. feature.proto) are not to be confused with Earth Engine features (i.e. ee.Feature), where the former is a protocol message type for serialized data input to the model and the latter is a geometry-based geographic data structure.)
# List of fixed-length features, all of which are float32.
columns = [
  tf.io.FixedLenFeature(shape=[1], dtype=tf.float32) for k in featureNames
]

# Dictionary with names as keys, features as values.
featuresDict = dict(zip(featureNames, columns))

pprint(featuresDict)
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Parse the dataset Now we need to make a parsing function for the data in the TFRecord files. The data comes in flattened 2D arrays per record and we want to use the first part of the array for input to the model and the last element of the array as the class label. The parsing function reads data from a serialized Example proto (i.e. example.proto) into a dictionary in which the keys are the feature names and the values are the tensors storing the value of the features for that example. (Learn more about parsing Example protocol buffer messages).
def parse_tfrecord(example_proto):
  """The parsing function.

  Read a serialized example into the structure defined by featuresDict.

  Args:
    example_proto: a serialized Example.

  Returns:
    A tuple of the predictors dictionary and the label, cast to an `int32`.
  """
  parsed_features = tf.io.parse_single_example(example_proto, featuresDict)
  labels = parsed_features.pop(label)
  return parsed_features, tf.cast(labels, tf.int32)

# Map the function over the dataset.
parsedDataset = trainDataset.map(parse_tfrecord, num_parallel_calls=5)

# Print the first parsed record to check.
pprint(iter(parsedDataset).next())
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Note that each record of the parsed dataset contains a tuple. The first element of the tuple is a dictionary with bands for keys and the numeric value of the bands for values. The second element of the tuple is a class label. Create additional features Another thing we might want to do as part of the input process is to create new features, for example NDVI, a vegetation index computed from reflectance in two spectral bands. Here are some helper functions for that.
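As a quick sanity check of the index itself (made-up reflectance values, assuming the usual Landsat 8 convention of B5 = near-infrared and B4 = red): NIR 0.5 and red 0.1 give NDVI = (0.5 - 0.1) / (0.5 + 0.1) ≈ 0.67, a typical value for healthy vegetation.

# Illustrative NDVI check with made-up reflectances (NIR=0.5, red=0.1); expect ~0.67.
print((0.5 - 0.1) / (0.5 + 0.1))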
def normalizedDifference(a, b):
  """Compute normalized difference of two inputs.

  Compute (a - b) / (a + b).  If the denominator is zero, add a small delta.

  Args:
    a: an input tensor with shape=[1]
    b: an input tensor with shape=[1]

  Returns:
    The normalized difference as a tensor.
  """
  nd = (a - b) / (a + b)
  nd_inf = (a - b) / (a + b + 0.000001)
  return tf.where(tf.is_finite(nd), nd, nd_inf)

def addNDVI(features, label):
  """Add NDVI to the dataset.

  Args:
    features: a dictionary of input tensors keyed by feature name.
    label: the target label

  Returns:
    A tuple of the input dictionary with an NDVI tensor added and the label.
  """
  features['NDVI'] = normalizedDifference(features['B5'], features['B4'])
  return features, label
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Model setup The basic workflow for classification in TensorFlow is: Create the model. Train the model (i.e. fit()). Use the trained model for inference (i.e. predict()). Here we'll create a Sequential neural network model using Keras. This simple model is inspired by examples in: The TensorFlow Get Started tutorial The TensorFlow Keras guide The Keras Sequential model examples Note that the model used here is purely for demonstration purposes and hasn't gone through any performance tuning. Create the Keras model Before we create the model, there's still a wee bit of pre-processing to get the data into the right input shape and a format that can be used with cross-entropy loss. Specifically, Keras expects a list of inputs and a one-hot vector for the class. (See the Keras loss function docs, the TensorFlow categorical identity docs and the tf.one_hot docs for details). Here we will use a simple neural network model with a 64 node hidden layer, a dropout layer and an output layer. Once the dataset has been prepared, define the model, compile it, fit it to the training data. See the Keras Sequential model guide for more details.
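For example, with nClasses = 3 (the value used in the next cell), tf.one_hot maps the integer label 2 to the vector [0, 0, 1]; a minimal check:

# One-hot encoding of a single class label (illustrative label value).
print(tf.one_hot(indices=2, depth=3))  # expected: [0., 0., 1.]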
from tensorflow import keras

# How many classes there are in the model.
nClasses = 3

# Add NDVI.
inputDataset = parsedDataset.map(addNDVI)

# Keras requires inputs as a tuple.  Note that the inputs must be in the
# right shape.  Also note that to use the categorical_crossentropy loss,
# the label needs to be turned into a one-hot vector.
def toTuple(dict, label):
  return tf.transpose(list(dict.values())), tf.one_hot(indices=label, depth=nClasses)

# Repeat the input dataset as many times as necessary in batches of 10.
inputDataset = inputDataset.map(toTuple).repeat().batch(10)

# Define the layers in the model.
model = tf.keras.models.Sequential([
  tf.keras.layers.Dense(64, activation=tf.nn.relu),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(nClasses, activation=tf.nn.softmax)
])

# Compile the model with the specified loss function.
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Fit the model to the training data.
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(x=inputDataset, epochs=3, steps_per_epoch=100)
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Check model accuracy on the test set Now that we have a trained model, we can evaluate it using the test dataset. To do that, read and prepare the test dataset in the same way as the training dataset. Here we specify a batch size of 1 so that each example in the test set is used exactly once to compute model accuracy. For the model's steps argument, just specify a number larger than the test dataset size (ignore the warning).
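If you would rather pass the exact number of evaluation steps instead of an over-estimate, one hedged option (relying on eager iteration of the dataset, as used in earlier cells) is to count the test records first:

# Count test examples so `steps` can be set exactly (may be slow for large files).
nTest = sum(1 for _ in tf.data.TFRecordDataset(testFilePath, compression_type='GZIP'))
print(nTest)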
testDataset = (
  tf.data.TFRecordDataset(testFilePath, compression_type='GZIP')
    .map(parse_tfrecord, num_parallel_calls=5)
    .map(addNDVI)
    .map(toTuple)
    .batch(1)
)

model.evaluate(testDataset, steps=100)
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Use the trained model to classify an image from Earth Engine Now it's time to classify the image that was exported from Earth Engine. If the exported image is large, it will be split into multiple TFRecord files in its destination folder. There will also be a JSON sidecar file called "the mixer" that describes the format and georeferencing of the image. Here we will find the image files and the mixer file, getting some info out of the mixer that will be useful during model inference. Find the image files and JSON mixer file in Cloud Storage Use gsutil to locate the files of interest in the output Cloud Storage bucket. Check to make sure your image export task finished before running the following.
# Get a list of all the files in the output bucket.
filesList = !gsutil ls 'gs://'{outputBucket}

# Get only the files generated by the image export.
exportFilesList = [s for s in filesList if imageFilePrefix in s]

# Get the list of image files and the JSON mixer file.
imageFilesList = []
jsonFile = None
for f in exportFilesList:
  if f.endswith('.tfrecord.gz'):
    imageFilesList.append(f)
  elif f.endswith('.json'):
    jsonFile = f

# Make sure the files are in the right order.
imageFilesList.sort()

pprint(imageFilesList)
print(jsonFile)
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Read the JSON mixer file The mixer contains metadata and georeferencing information for the exported patches, each of which is in a different file. Read the mixer to get some information needed for prediction.
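As an alternative to shelling out to gsutil, TensorFlow's file API can usually read gs:// paths directly. This is only a hedged sketch — it assumes your TensorFlow build has Cloud Storage support, and tf.gfile is the TF 1.x name of the module:

import json

# Read the mixer JSON through TensorFlow's file API instead of gsutil.
with tf.gfile.GFile(jsonFile, 'r') as f:
  mixerAlt = json.loads(f.read())
pprint(mixerAlt)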
import json

# Load the contents of the mixer file to a JSON object.
jsonText = !gsutil cat {jsonFile}

# Get a single string w/ newlines from the IPython.utils.text.SList
mixer = json.loads(jsonText.nlstr)
pprint(mixer)
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Read the image files into a dataset You can feed the list of files (imageFilesList) directly to the TFRecordDataset constructor to make a combined dataset on which to perform inference. The input needs to be pre-processed differently from the training and testing data. Mainly, this is because the pixels are written into records as patches; we need to read each patch in as one big tensor (one patch per band), then flatten it into lots of little per-pixel tensors.
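The patch-to-pixels step is easiest to see on a toy dictionary; this hedged sketch (made-up band names 'a' and 'b', a "patch" of three pixels) mirrors what flat_map plus from_tensor_slices does to each parsed patch below:

# A toy 'patch' of 3 pixels for two made-up bands, shaped like [pixels, 1].
toyPatch = {'a': [[1.0], [2.0], [3.0]], 'b': [[10.0], [20.0], [30.0]]}
toyDataset = tf.data.Dataset.from_tensors(toyPatch)

# Slicing the patch dictionary yields one small record per pixel.
toyPixels = toyDataset.flat_map(lambda d: tf.data.Dataset.from_tensor_slices(d))
for record in toyPixels:
  print(record)  # {'a': [1.], 'b': [10.]}, then [2.]/[20.], then [3.]/[30.]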
# Get relevant info from the JSON mixer file.
PATCH_WIDTH = mixer['patchDimensions'][0]
PATCH_HEIGHT = mixer['patchDimensions'][1]
PATCHES = mixer['totalPatches']
PATCH_DIMENSIONS_FLAT = [PATCH_WIDTH * PATCH_HEIGHT, 1]

# Note that the tensors are in the shape of a patch, one patch for each band.
imageColumns = [
  tf.FixedLenFeature(shape=PATCH_DIMENSIONS_FLAT, dtype=tf.float32) for k in bands
]

# Parsing dictionary.
imageFeaturesDict = dict(zip(bands, imageColumns))

# Note that you can make one dataset from many files by specifying a list.
imageDataset = tf.data.TFRecordDataset(imageFilesList, compression_type='GZIP')

# Parsing function.
def parse_image(example_proto):
  return tf.parse_single_example(example_proto, imageFeaturesDict)

# Parse the data into tensors, one long tensor per patch.
imageDataset = imageDataset.map(parse_image, num_parallel_calls=5)

# Break our long tensors into many little ones.
imageDataset = imageDataset.flat_map(
  lambda features: tf.data.Dataset.from_tensor_slices(features)
)

# Add additional features (NDVI).
imageDataset = imageDataset.map(
  # Add NDVI to a feature that doesn't have a label.
  lambda features: addNDVI(features, None)[0]
)

# Turn the dictionary in each record into a tuple with a dummy label.
imageDataset = imageDataset.map(
  # Add a dummy target (-1), with a value that is obviously ridiculous.
  # This is because the model expects a tuple of (inputs, label).
  lambda dataDict: (tf.transpose(list(dataDict.values())), tf.constant(-1))
)

# Turn each patch into a batch.
imageDataset = imageDataset.batch(PATCH_WIDTH * PATCH_HEIGHT)
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Generate predictions for the image pixels To get predictions in each pixel, run the image dataset through the trained model using model.predict(). Print the first prediction to see that the output is a list of the three class probabilities for each pixel. Running all predictions might take a while.
# Run prediction in batches, with as many steps as there are patches.
predictions = model.predict(imageDataset, steps=PATCHES, verbose=1)

# Note that the predictions come as a numpy array.  Check the first one.
print(predictions[0])
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Write the predictions to a TFRecord file Now that there's a list of class probabilities in predictions, it's time to write them back into a file, optionally including a class label, which is simply the index of the maximum probability. We'll write directly from TensorFlow to a file in the output Cloud Storage bucket. Iterate over the list, compute the class label, and write the class and the probabilities in patches. Specifically, we need to write the pixels into the file as patches in the same order they came out. The records are written as serialized tf.train.Example protos. This might take a while.
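For a single pixel, the class label is just the argmax of its probability vector; for example (with illustrative probabilities, hedging on the exact array shape that predict() returns), [0.1, 0.7, 0.2] maps to class index 1, which is the vegetation slot in this demo's bare/vegetation/water ordering:

import numpy as np

# Hard class label from one probability vector (illustrative values only).
probs = np.array([0.1, 0.7, 0.2])
print(np.argmax(probs))  # 1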
outputImageFile = 'gs://' + outputBucket + '/Classified_pixel_demo.TFRecord'
print('Writing to file ' + outputImageFile)

# Instantiate the writer.
writer = tf.python_io.TFRecordWriter(outputImageFile)

# Every patch-worth of predictions we'll dump an example into the output
# file with a single feature that holds our predictions. Since our predictions
# are already in the order of the exported data, the patches we create here
# will also be in the right order.
patch = [[], [], [], []]
curPatch = 1
for prediction in predictions:
  patch[0].append(tf.argmax(prediction, 1))
  patch[1].append(prediction[0][0])
  patch[2].append(prediction[0][1])
  patch[3].append(prediction[0][2])

  # Once we've seen a patches-worth of class_ids...
  if (len(patch[0]) == PATCH_WIDTH * PATCH_HEIGHT):
    print('Done with patch ' + str(curPatch) + ' of ' + str(PATCHES) + '...')

    # Create an example
    example = tf.train.Example(
      features=tf.train.Features(
        feature={
          'prediction': tf.train.Feature(
              int64_list=tf.train.Int64List(
                  value=patch[0])),
          'bareProb': tf.train.Feature(
              float_list=tf.train.FloatList(
                  value=patch[1])),
          'vegProb': tf.train.Feature(
              float_list=tf.train.FloatList(
                  value=patch[2])),
          'waterProb': tf.train.Feature(
              float_list=tf.train.FloatList(
                  value=patch[3])),
        }
      )
    )

    # Write the example to the file and clear our patch array so it's ready for
    # another batch of class ids
    writer.write(example.SerializeToString())
    patch = [[], [], [], []]
    curPatch += 1

writer.close()
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Upload the classifications to an Earth Engine asset Verify the existence of the predictions file At this stage, there should be a predictions TFRecord file sitting in the output Cloud Storage bucket. Use the gsutil command to verify that the predictions image (and associated mixer JSON) exist and have non-zero size.
!gsutil ls -l {outputImageFile}
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Upload the classified image to Earth Engine Upload the image to Earth Engine directly from the Cloud Storage bucket with the earthengine command. Provide both the image TFRecord file and the JSON file as arguments to earthengine upload.
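The ingestion itself runs as an Earth Engine task, so if you want to confirm it finished before viewing the asset, the earthengine command-line tool has a task listing you can poll (a hedged suggestion — check earthengine task --help for the exact subcommands in your installed version):

# List recent Earth Engine tasks to check the ingestion status.
!earthengine task list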
# REPLACE WITH YOUR USERNAME:
USER_NAME = 'nclinton'
outputAssetID = 'users/' + USER_NAME + '/Classified_pixel_demo'
print('Writing to ' + outputAssetID)

# Start the upload.
!earthengine upload image --asset_id={outputAssetID} {outputImageFile} {jsonFile}
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
View the ingested asset Display the vector of class probabilities as an RGB image with colors corresponding to the probability of bare, vegetation, water in a pixel. Also display the winning class using the same color palette.
predictionsImage = ee.Image(outputAssetID)

predictionVis = {
  'bands': 'prediction',
  'min': 0,
  'max': 2,
  'palette': ['red', 'green', 'blue']
}
probabilityVis = {'bands': ['bareProb', 'vegProb', 'waterProb']}

predictionMapid = predictionsImage.getMapId(predictionVis)
probabilityMapid = predictionsImage.getMapId(probabilityVis)

map = folium.Map(location=[38., -122.5])
folium.TileLayer(
  tiles=EE_TILES.format(**predictionMapid),
  attr='Google Earth Engine',
  overlay=True,
  name='prediction',
).add_to(map)
folium.TileLayer(
  tiles=EE_TILES.format(**probabilityMapid),
  attr='Google Earth Engine',
  overlay=True,
  name='probability',
).add_to(map)
map.add_child(folium.LayerControl())
map
python/examples/ipynb/TF_demo1_keras.ipynb
tylere/earthengine-api
apache-2.0
Data
url = "https://raw.githubusercontent.com/fehiepsi/rethinking-numpyro/master/data/cherry_blossoms.csv" cherry_blossoms = pd.read_csv(url, sep=";") df = cherry_blossoms display(df.sample(n=5, random_state=1)) display(df.describe()) df2 = df[df.doy.notna()] # complete cases on doy (day of year) x = df2.year.values.astype(float) y = df2.doy.values.astype(float) xlabel = "year" ylabel = "doy"
notebooks/misc/splines_numpyro.ipynb
probml/pyprobml
mit
B-splines
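Before fitting, it helps to state the model the code below implements: with basis matrix $B$ (one column per spline basis function, built from the knots) and weights $w$, the regression mean is $\mu_i = a + \sum_k w_k B_{ik}$ with $y_i \sim \mathcal{N}(\mu_i, \sigma)$, Normal priors on $a$ and $w$, and an Exponential prior on $\sigma$ — exactly the `model` function defined in the next cell.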
def make_splines(x, num_knots, degree=3):
    knot_list = jnp.quantile(x, q=jnp.linspace(0, 1, num=num_knots))
    knots = jnp.pad(knot_list, (3, 3), mode="edge")
    B = BSpline(knots, jnp.identity(num_knots + 2), k=degree)(x)
    return B

def plot_basis(x, B, w=None):
    if w is None:
        w = jnp.ones((B.shape[1]))
    fig, ax = plt.subplots()
    ax.set_xlim(np.min(x), np.max(x))
    ax.set_xlabel(xlabel)
    ax.set_ylabel("basis value")
    for i in range(B.shape[1]):
        ax.plot(x, (w[i] * B[:, i]), "k", alpha=0.5)
    return ax

nknots = 15
B = make_splines(x, nknots)
ax = plot_basis(x, B)
plt.savefig(f"splines_basis_{nknots}_{ylabel}.pdf", dpi=300)

num_knots = 15
degree = 3
knot_list = jnp.quantile(x, q=jnp.linspace(0, 1, num=num_knots))
print(knot_list)
print(knot_list.shape)
knots = jnp.pad(knot_list, (3, 3), mode="edge")
print(knots)
print(knots.shape)
B = BSpline(knots, jnp.identity(num_knots + 2), k=degree)(x)
print(B.shape)

def plot_basis_with_vertical_line(x, B, xstar):
    ax = plot_basis(x, B)
    num_knots = B.shape[1]
    ndx = np.where(x == xstar)[0][0]
    for i in range(num_knots):
        yy = B[ndx, i]
        if yy > 0:
            ax.scatter(xstar, yy, s=40)
    ax.axvline(x=xstar)
    return ax

plot_basis_with_vertical_line(x, B, 1200)
plt.savefig(f"splines_basis_{nknots}_vertical_{ylabel}.pdf", dpi=300)

def model(B, y, offset=100):
    a = numpyro.sample("a", dist.Normal(offset, 10))
    w = numpyro.sample("w", dist.Normal(0, 10).expand(B.shape[1:]))
    sigma = numpyro.sample("sigma", dist.Exponential(1))
    mu = numpyro.deterministic("mu", a + B @ w)
    # mu = numpyro.deterministic("mu", a + jnp.sum(B * w, axis=-1))  # equivalent
    numpyro.sample("y", dist.Normal(mu, sigma), obs=y)

def fit_model(B, y, offset=100):
    start = {"w": jnp.zeros(B.shape[1])}
    guide = AutoLaplaceApproximation(model, init_loc_fn=init_to_value(values=start))
    svi = SVI(model, guide, optim.Adam(1), Trace_ELBO(), B=B, y=y, offset=offset)
    params, losses = svi.run(random.PRNGKey(0), 20000)  # needs 20k iterations
    post = guide.sample_posterior(random.PRNGKey(1), params, (1000,))
    return post

post = fit_model(B, y)
w = jnp.mean(post["w"], 0)
plot_basis(x, B, w)
plt.savefig(f"splines_basis_weighted_{nknots}_{ylabel}.pdf", dpi=300)

def plot_post_pred(post, x, y):
    mu = post["mu"]
    mu_PI = jnp.percentile(mu, q=(1.5, 98.5), axis=0)
    plt.figure()
    plt.scatter(x, y)
    plt.fill_between(x, mu_PI[0], mu_PI[1], color="k", alpha=0.5)
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
    plt.show()

plot_post_pred(post, x, y)
plt.savefig(f"splines_post_pred_{nknots}_{ylabel}.pdf", dpi=300)

a = jnp.mean(post["a"], 0)
w = jnp.mean(post["w"], 0)
mu = a + B @ w

def plot_pred(mu, x, y):
    plt.figure()
    plt.scatter(x, y, alpha=0.5)
    plt.plot(x, mu, "k-", linewidth=4)
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)

plot_pred(mu, x, y)
plt.savefig(f"splines_point_pred_{nknots}_{ylabel}.pdf", dpi=300)
notebooks/misc/splines_numpyro.ipynb
probml/pyprobml
mit
Repeat with temperature as target variable
df2 = df[df.temp.notna()]  # complete cases
x = df2.year.values.astype(float)
y = df2.temp.values.astype(float)
xlabel = "year"
ylabel = "temp"

nknots = 15
B = make_splines(x, nknots)
plot_basis_with_vertical_line(x, B, 1200)
plt.savefig(f"splines_basis_{nknots}_vertical_{ylabel}.pdf", dpi=300)

post = fit_model(B, y, offset=6)
w = jnp.mean(post["w"], 0)
plot_basis(x, B, w)
plt.savefig(f"splines_basis_weighted_{nknots}_{ylabel}.pdf", dpi=300)

plot_post_pred(post, x, y)
plt.savefig(f"splines_post_pred_{nknots}_{ylabel}.pdf", dpi=300)

a = jnp.mean(post["a"], 0)
w = jnp.mean(post["w"], 0)
mu = a + B @ w
plot_pred(mu, x, y)
plt.savefig(f"splines_point_pred_{nknots}_{ylabel}.pdf", dpi=300)
notebooks/misc/splines_numpyro.ipynb
probml/pyprobml
mit
Maximum likelihood estimation
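A note on terminology: sklearn's Ridge, used below, minimizes the penalized objective $\lVert y - a - Bw\rVert^2 + \alpha\lVert w\rVert^2$ with $\alpha = 1$ by default, so it is a lightly regularized variant of the maximum-likelihood fit; the commented-out LinearRegression call is the unpenalized MLE. A hedged sketch for comparing the two sets of coefficients:

from sklearn.linear_model import LinearRegression, Ridge

# Compare unpenalized least squares with the default ridge fit (sketch only).
reg_ols = LinearRegression().fit(B, y)
reg_ridge = Ridge(alpha=1.0).fit(B, y)
print(reg_ols.coef_[:5])    # MLE weights for the first few basis functions
print(reg_ridge.coef_[:5])  # slightly shrunk ridge weights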
from sklearn.linear_model import LinearRegression, Ridge

# reg = LinearRegression().fit(B, y)
reg = Ridge().fit(B, y)
w = reg.coef_
a = reg.intercept_
print(w)
print(a)

mu = a + B @ w
plot_pred(mu, x, y)
plt.savefig(f"splines_MLE_{nknots}_{ylabel}.pdf", dpi=300)
notebooks/misc/splines_numpyro.ipynb
probml/pyprobml
mit
Enter your details for the Twitter API
# get access to the twitter API
APP_KEY = 'fQCYxyQmFDUE6aty0JEhDoZj7'
APP_SECRET = 'ZwVIgnWMpuEEVd1Tlg6TWMuyRwd3k90W3oWyLR2Ek1tnjnRvEG'
OAUTH_TOKEN = '824520596293820419-f4uGwMV6O7PSWUvbPQYGpsz5fMSVMct'
OAUTH_TOKEN_SECRET = '1wq51Im5HQDoSM0Fb5OzAttoP3otToJtRFeltg68B8krh'
Lesson 14/Lesson 14 - Assignment.ipynb
jornvdent/WUR-Geo-Scripting-Course
gpl-3.0