| markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
| stringlengths 0–1.02M | stringlengths 0–832k | stringlengths 0–1.02M | stringlengths 3–36 | stringlengths 6–265 | stringlengths 6–127 |
2. Prepare test data

- Download test data: PhaseNet picks of the 2019 Ridgecrest earthquake sequence
  1. picks file: picks.json
  2. station information: stations.csv
  3. events in SCSN catalog: events.csv
  4. config file: config.pkl

```bash
wget https://github.com/wayneweiqiang/GMMA/releases/download/test_data/test_data.zip
unzip test_data.zip
``` | !wget https://github.com/wayneweiqiang/GMMA/releases/download/test_data/test_data.zip
!unzip test_data.zip
import os
import json
import pandas as pd
import requests

# GAMMA_API_URL must point to a running GaMMA API service; the value below is an
# assumed local default, not taken from this notebook excerpt.
GAMMA_API_URL = "http://localhost:8000"

data_dir = lambda x: os.path.join("test_data", x)
station_csv = data_dir("stations.csv")
pick_json = data_dir("picks.json")
catalog_csv = data_dir("catalog_gamma.csv")
picks_csv = data_dir("picks_gamma.csv")
if not os.path.exists("figures"):
os.makedirs("figures")
figure_dir = lambda x: os.path.join("figures", x)
## set config
config = {'xlim_degree': [-118.004, -117.004],
'ylim_degree': [35.205, 36.205],
'z(km)': [0, 41]}
## read stations
stations = pd.read_csv(station_csv, delimiter="\t")
stations = stations.rename(columns={"station":"id"})
stations_json = json.loads(stations.to_json(orient="records"))
## read picks
picks = pd.read_json(pick_json).iloc[:500]
picks["timestamp"] = picks["timestamp"].apply(lambda x: x.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3])
picks_json = json.loads(picks.to_json(orient="records"))
## run association
result = requests.post(f"{GAMMA_API_URL}/predict", json={
"picks":picks_json,
"stations":stations_json,
"config": config
})
result = result.json()
catalog_gamma = json.loads(result["catalog"])
picks_gamma = json.loads(result["picks"])
## show result
print("GaMMA catalog:")
display(pd.DataFrame(catalog_gamma)[["time", "latitude", "longitude", "depth(m)", "magnitude", "covariance"]])
print("GaMMA association:")
display(pd.DataFrame(picks_gamma)) | GaMMA catalog:
| MIT | docs/example_interactive.ipynb | wayneweiqiang/GMMA |
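As a quick sanity check on the association output above, one might map the GaMMA catalog against the stations. This sketch is not part of the original notebook and assumes the station table carries `longitude`/`latitude` columns, as the degree-based config suggests:

```python
import matplotlib.pyplot as plt

catalog_df = pd.DataFrame(catalog_gamma)
plt.figure(figsize=(6, 6))
plt.scatter(catalog_df["longitude"], catalog_df["latitude"],
            s=8, c=catalog_df["magnitude"], cmap="viridis", label="GaMMA events")
plt.scatter(stations["longitude"], stations["latitude"],
            marker="^", c="k", label="stations")
plt.colorbar(label="magnitude")
plt.legend()
plt.savefig(figure_dir("catalog_map.png"))
```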
Nested conditionals (an if block inside another if block): best avoided when possible. | # nested-conditional practice (Korean strings translated to English)
info = input('input your name, phone number, address, sex: ')
info_list = info.split(', ')
if info_list[0][0] == 'P':  # first letter of the surname (the original compared against the Korean surname '박')
    if info_list[1][0:3] == '010':
        if info_list[2] == 'Seoul':
            if info_list[3] == 'male':
                print('This is the person we are looking for.')
            else:
                print('The sex is different.')
        else:
            print('The address is different.')
    else:
        print('The phone number is different.')
else:
    print('The surname is different.') | input your name, phone number, address, sex: Park Chanho, 01011234567, Seoul, male
This is the person we are looking for.
| MIT | week_03.ipynb | HUFS-Programming-2022/JongbeenSong_202001862 |
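As the note above says, deeply nested `if` blocks quickly become hard to read. A minimal sketch (my own, not part of the original exercise) of the same check flattened into a list of (test, failure-message) pairs:

```python
checks = [
    (lambda f: f[0][0] == 'P', 'The surname is different.'),
    (lambda f: f[1][:3] == '010', 'The phone number is different.'),
    (lambda f: f[2] == 'Seoul', 'The address is different.'),
    (lambda f: f[3] == 'male', 'The sex is different.'),
]

def find_person(fields):
    for test, failure_message in checks:
        if not test(fields):
            return failure_message
    return 'This is the person we are looking for.'

print(find_person(['Park Chanho', '01011234567', 'Seoul', 'male']))
```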
Logical operators: useful when comparison operators appear several times. A chained form such as a < 0 < b can only be used in Python. | a = True
if a == True:
    print('Do not write it this way (wrong form).')
if a:
    print('Write it this way instead.')
fruit = ['banana', 'apple', 'pear', 'berry']
answer = input('what is your favorite fruit?: ')
if answer in fruit:
print('we have your favorite food!')
else:
print('we do not have your favorite food!')
option = input('would you like to add your favorite food? [y/n]: ')
if option == 'y':
fruit.append(answer)
print(f'now we have {fruit} in our list') | what is your favorite fruit?: strawberry
we do not have your favorite food!
would you like to add your favorite food? [y/n]: y
now we have ['banana', 'apple', 'pear', 'berry', 'strawberry'] in our list
| MIT | week_03.ipynb | HUFS-Programming-2022/JongbeenSong_202001862 |
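The chained-comparison form mentioned above (a < 0 < b) is not actually exercised in the cell, so here is a minimal illustration of it:

```python
a, b = -3, 7
x = 0
if a < x < b:  # equivalent to (a < x) and (x < b)
    print('x lies strictly between a and b')
```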
Walrus operator: combines assignment with an expression (an assignment that also yields a value). Unlike the other topics we covered, I had hardly ever used it, so I studied it in more detail with outside references and took notes on the concept as well, rather than just doing a simple hands-on exercise. | # A basic Python principle: one line should carry only one meaning.
"""
print(student = 'Cheolsu')  << raises an error (interpreted as a keyword argument, not an assignment).
Instead, write
student = 'Cheolsu'
print(student)              << either this, or use the walrus operator.
"""
print(student := 'Cheolsu')
while s := input('input: '):
if s == 'quit':
break
else:
print('output: ' + s)
print('program ended') | input: hello
output: hello
input: quit
program ended
| MIT | week_03.ipynb | HUFS-Programming-2022/JongbeenSong_202001862 |
Strings | # !pip install nltk  => run in a terminal to install the package
import nltk
nltk.download('book', quiet=True)
from nltk import book | *** Introductory Examples for the NLTK Book ***
Loading text1, ..., text9 and sent1, ..., sent9
Type the name of the text or sentence to view it.
Type: 'texts()' or 'sents()' to list the materials.
text1: Moby Dick by Herman Melville 1851
text2: Sense and Sensibility by Jane Austen 1811
text3: The Book of Genesis
text4: Inaugural Address Corpus
text5: Chat Corpus
text6: Monty Python and the Holy Grail
text7: Wall Street Journal
text8: Personals Corpus
text9: The Man Who Was Thursday by G . K . Chesterton 1908
| MIT | week_03.ipynb | HUFS-Programming-2022/JongbeenSong_202001862 |
String and NLTK practice | genesis = book.text3
genesis_tokens = genesis.tokens
len(genesis_tokens) | _____no_output_____ | MIT | week_03.ipynb | HUFS-Programming-2022/JongbeenSong_202001862 |
Additional personal practice | from wordcloud import WordCloud
import matplotlib.pyplot as plt
from collections import Counter
from PIL import Image
import numpy as np
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
stop_words = set(stopwords.words('english'))
# filter out English stopwords (case-insensitive)
filtered_text = [w for w in genesis_tokens if w.lower() not in stop_words]
count = Counter(filtered_text)
wc = WordCloud(width=400, height=400, scale=2.0, max_font_size=250)
generated_image = wc.generate_from_frequencies(count)
plt.figure()
plt.imshow(generated_image) | _____no_output_____ | MIT | week_03.ipynb | HUFS-Programming-2022/JongbeenSong_202001862 |
Quiz answers | thursday = book.text9
print(len(set(thursday.tokens)) / len(thursday.tokens))
monty = book.text6
sorted(set(monty.tokens), reverse=True)[:10]
reversed_token = sorted(set(monty.tokens), reverse=True)
reversed_processed_token = []
for token in reversed_token:
if 'z' in token:
token = token.replace('z', 'Z')
reversed_processed_token.append(token)
else:
if len(token) >=4 :
token = token[:-1] + token[-1].upper()
reversed_processed_token.append(token)
print(reversed_processed_token)
info = input('input ID, phone number, email address: ')
info_list = info.split()
ID = info_list[0]
raw_phone_nvm = info_list[1]
email_id = info_list[2]
if ID[6] in ('1', '2'):  # fixed: the original `ID[6] == ('1' or '2')` only ever compared against '1'
    if ID[:2] == '00':
        b_year = '2000'
    else:
        b_year = '19' + ID[:2]
elif ID[6] in ('3', '4'):
    b_year = '20' + ID[:2]
b_month = ID[2:4]
b_date = ID[4:6]
if ID[6] in ('1', '3'):
    gender = 'male'    # originally the Korean '남성'
elif ID[6] in ('2', '4'):
    gender = 'female'  # originally '여성'
phone_nvm = raw_phone_nvm[:3] + '-' + raw_phone_nvm[3:7] + '-' + raw_phone_nvm[7:]
email_address = email_id + '@gmail.com'
print(f'You are a {gender}, born {b_year}-{b_month}-{b_date}.')
print(f'Your phone number is {phone_nvm}.')
print(f'Your email address is {email_address}.') | Your phone number is 010-1234-5678.
Your email address is [email protected].
| MIT | week_03.ipynb | HUFS-Programming-2022/JongbeenSong_202001862 |
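A side note on the bug fixed above: `ID[6] == ('1' or '2')` is a classic Python pitfall, because `or` returns its first truthy operand, so the parenthesized expression collapses to `'1'` before the comparison ever runs. A quick demonstration:

```python
print(('1' or '2'))          # '1' -- the '2' is never considered
print('2' == ('1' or '2'))   # False, even though '2' looks included
print('2' in ('1', '2'))     # True -- the membership test is what was intended
```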
Load necessary modules | # show images inline
%matplotlib inline
# automatically reload modules when they have changed
%load_ext autoreload
%autoreload 2
import os
os.environ['CUDA_VISIBLE_DEVICES'] = str(1)
# import keras
import keras
# import keras_retinanet
from keras_retinanet import models
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image
from keras_retinanet.utils.visualization import draw_box, draw_caption
from keras_retinanet.utils.colors import label_color
#from keras_retinanet.utils.gpu import setup_gpu
# import miscellaneous modules
import matplotlib.pyplot as plt
import cv2
import os
import numpy as np
import time
# set tf backend to allow memory to grow, instead of claiming everything
import tensorflow as tf
# use this to change which GPU to use
#gpu = 1
# set the modified tf session as backend in keras
#setup_gpu(gpu)
from keras_retinanet import models
# adjust this to point to your downloaded/trained model
# models can be downloaded here: https://github.com/fizyr/keras-retinanet/releases
model_path = os.path.join('..', 'snapshots', 'resnet152_pascal_02_backup.h5')
dataset_path = "/caa/Homes01/mburges/CVSP-Object-Detection-Historical-Videos/retina_net_video/output/"
# load retinanet model
model = models.load_model(model_path, backbone_name='resnet152')
model = models.convert_model(model) | Using TensorFlow backend.
| MIT | keras_retinanet/examples/ResNet50RetinaNetcustom.ipynb | MarviB16/CVSP-Object-Detection-Historical-Videos |
Load RetinaNet model | # load label to names mapping for visualization purposes
labels_to_names = {0: 'crowd', 1: 'civilian', 2: 'soldier', 3: 'civil vehicle', 4: 'mv'} | _____no_output_____ | MIT | keras_retinanet/examples/ResNet50RetinaNetcustom.ipynb | MarviB16/CVSP-Object-Detection-Historical-Videos |
Run detection on example | for filename in os.listdir(dataset_path):
image = None
if filename.endswith('.jpg'):
# Open the file:
image = cv2.imread(os.path.join(dataset_path,filename))
if image is not None:
# copy to draw on
draw = image.copy()
draw_regression = image.copy()
draw = cv2.cvtColor(draw, cv2.COLOR_BGR2RGB)
# preprocess image for network
image = preprocess_image(image)
image, scale = resize_image(image)
# process image
start = time.time()
result = model.predict_on_batch(np.expand_dims(image, axis=0))
boxes, scores, labels = result
print("processing time: ", time.time() - start)
# correct for image scale
boxes /= scale
# visualize detections
for box, score, label in zip(boxes[0], scores[0], labels[0]):
# scores are sorted so we can break
if score < 0.5:
break
print (box, label, score)
color = label_color(label)
b = box.astype(int)
draw_box(draw, b, color=color)
caption = "{} {:.3f}".format(labels_to_names[label], score)
draw_caption(draw, b, caption)
cv2.imwrite(os.path.join(dataset_path,"detected_"+filename), draw)
#plt.figure(figsize=(17, 17))
#plt.axis('off')
#plt.imshow(draw)
#plt.savefig('/caa/Homes01/mburges/CVSP-Object-Detection-Historical-Videos/result.png')
#plt.show() | processing time: 14.983539581298828
processing time: 0.12204432487487793
processing time: 0.09300637245178223
[607.5637 225.40349 737.20013 603.96277] 1 0.88375086
processing time: 0.09206128120422363
[486.1151 155.2592 717.5609 624.07947] 1 0.52050894
processing time: 0.09435248374938965
processing time: 0.09000372886657715
processing time: 0.0921483039855957
processing time: 0.09550046920776367
[510.8549 137.49898 804.9428 553.4199 ] 2 0.8575064
processing time: 0.09555768966674805
processing time: 0.09353828430175781
processing time: 0.09651637077331543
processing time: 0.09403705596923828
[347.28036 79.86496 527.74695 608.31683] 2 0.5910789
[347.28036 79.86496 527.74695 608.31683] 1 0.517396
processing time: 0.09273624420166016
[425.68863 119.19557 676.1325 627.0878 ] 1 0.5313673
processing time: 0.0914297103881836
[371.56665 168.74414 513.4851 540.01263] 2 0.70219386
[371.80887 168.88281 513.1104 547.34534] 1 0.53447974
processing time: 0.09416532516479492
processing time: 0.09432744979858398
processing time: 0.09474587440490723
processing time: 0.09543561935424805
processing time: 0.09648942947387695
processing time: 0.09536194801330566
processing time: 0.09453773498535156
[414.67847 263.89792 499.37177 378.30865] 1 0.9172611
processing time: 0.09071469306945801
processing time: 0.08962607383728027
[451.50555 195.46921 696.44965 679.3483 ] 1 0.5325235
processing time: 0.09146785736083984
[376.33533 167.56775 492.77762 396.34012] 1 0.7805066
processing time: 0.09313607215881348
[456.04694 156.29828 673.3232 628.95074] 1 0.7096983
processing time: 0.09243106842041016
processing time: 0.09106063842773438
processing time: 0.09378266334533691
processing time: 0.10053062438964844
[436.5713 63.5298 831.2737 684.41486] 2 0.6563158
processing time: 0.1033942699432373
processing time: 0.09522390365600586
processing time: 0.09610199928283691
[239.43962 188.80275 419.7838 668.23425] 1 0.8739916
processing time: 0.0928342342376709
processing time: 0.09429478645324707
processing time: 0.0940711498260498
[500.65585 205.6814 736.7623 589.0865 ] 2 0.9546621
processing time: 0.09396195411682129
processing time: 0.09192037582397461
processing time: 0.0907444953918457
processing time: 0.09572935104370117
processing time: 0.09575319290161133
processing time: 0.08878946304321289
processing time: 0.0968015193939209
processing time: 0.08921289443969727
processing time: 0.09622716903686523
processing time: 0.09737372398376465
processing time: 0.09994244575500488
[576.2696 298.4631 792.2819 609.51874] 1 0.8295926
| MIT | keras_retinanet/examples/ResNet50RetinaNetcustom.ipynb | MarviB16/CVSP-Object-Detection-Historical-Videos |
BAEKJOON problem 1021 - Rotating Queue
https://www.acmicpc.net/problem/1021

Problem: Jimin has a bidirectional circular queue (a deque) containing N elements, from which he wants to extract several elements. He can perform the following three operations on the queue:

1. Pop the first element. If the queue was a1, ..., ak, it becomes a2, ..., ak.
2. Shift left by one position: a1, ..., ak becomes a2, ..., ak, a1.
3. Shift right by one position: a1, ..., ak becomes ak, a1, ..., ak-1.

Given the number N of elements initially in the queue and the positions (in the initial queue) of the elements Jimin wants to extract, write a program that prints the minimum number of operations 2 and 3 needed to extract those elements in the given order.

Input
- The first line contains the queue size N and the number M of elements to extract. N is a natural number no greater than 50, and M is a natural number no greater than N.
- The second line lists, in order, the positions of the elements to extract; each position is a natural number between 1 and N.

Output
- Print the answer on the first line.

Sample input 1
```
10 3
1 2 3
```

Sample output 1
```
0
```

Solution | n, m = map(int, input().split())
goal = list(map(int, input().split()))
ls = list(range(1, n+1)) # build the list 1..n so each goal element is matched at the front of ls
count = 0
while len(goal) > 0:
if goal[0] == ls[0]:
ls.pop(0)
goal.pop(0)
    elif ls.index(goal[0]) <= len(ls) / 2: # goal[0] sits in the front half of ls -> rotate left with pop(0) (operation 2)
ls.append(ls.pop(0))
count += 1
    else: # goal[0] sits in the back half of ls -> rotate right with pop() (operation 3)
ls = [ls.pop()] + ls
count += 1
print(count) | _____no_output_____ | MIT | Algorithm Problems/deque_baekjoon_1021_rotating_queue.ipynb | hyeshinoh/Study_Algorithm |
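The notebook's filename mentions `deque`, but the solution above rotates a plain list, where each `pop(0)` is O(n). A sketch of the same algorithm (mine, not from the notebook) using `collections.deque`, whose `rotate` and end operations avoid the repeated list shifting:

```python
from collections import deque

def min_rotations(n, goals):
    dq = deque(range(1, n + 1))
    count = 0
    for g in goals:
        i = dq.index(g)
        if i <= len(dq) // 2:
            dq.rotate(-i)               # operation 2, applied i times
            count += i
        else:
            dq.rotate(len(dq) - i)      # operation 3, applied len(dq)-i times
            count += len(dq) - i
        dq.popleft()                    # operation 1 (not counted)
    return count

print(min_rotations(10, [1, 2, 3]))     # 0, matching the sample above
```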
Zindi - Sentiment Analysis_Tunisian Arabizi.ipynb | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_style("white")
from sklearn.model_selection import train_test_split # function for splitting data to train and test sets
import re, string
import nltk
from nltk.corpus import stopwords
from nltk.classify import SklearnClassifier
#from wordcloud import WordCloud,STOPWORDS
from subprocess import check_output
df = pd.read_csv("Train.csv")
df.head()
df.shape
len(df['ID'].unique())
test = pd.read_csv("Test.csv")
test.head() | _____no_output_____ | MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
Data Cleaning | df.head()
positive = df[df['label'] == 1]
negative = df[df['label'] == -1]
df = pd.concat([positive, negative], axis=0)
df.head(10)
df.isna().sum()
df.dropna(inplace=True)
df.isna().sum()
df.duplicated().sum()
test.isna().sum()
test.duplicated().sum() | _____no_output_____ | MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
Explore Corpus Character Set | from nltk import FreqDist
import re
corpus_as_char_list = "".join(df.text.tolist())
print(type(corpus_as_char_list),len(corpus_as_char_list))
fdist1 = FreqDist([c for c in corpus_as_char_list])
print("number of characters:" + str(fdist1.N()))
print("number of unique characters:" + str(fdist1.B()))
print('List of distinct characters:')
print(sorted(list(fdist1.keys())))
print('The most common characters:')
fdist1.most_common(5)
fdist1.plot(20, cumulative=False)
fdist1.plot(20,cumulative=True)
corpus_chars_df = pd.DataFrame(fdist1.items())
corpus_chars_df.columns = ['character','frequency']
# Unicode number of each distinct character:
corpus_chars_df['unicode_dec']= corpus_chars_df.character.map(ord)
corpus_chars_df['unicode_hex']= corpus_chars_df.character.map(lambda x: hex(ord(x)))
corpus_chars_df = corpus_chars_df.set_index('character')
corpus_chars_df.head()
idx = corpus_chars_df.unicode_hex.str.startswith('0x60')
print(corpus_chars_df.shape[0],idx.sum())
# Characters from the Standard Arabic Character set
corpus_chars_df[idx].sort_values(by='unicode_dec', ascending=True)
# Characters from the Extended Arabic Character set
corpus_chars_df[~idx].sort_values(by='unicode_dec', ascending=True)
# Rare characters
u = corpus_chars_df[corpus_chars_df.frequency<5]
print(u.shape[0])
#print(sorted(u.index.tolist()))
print(','.join(sorted(u.index.tolist())))
# Rare characters sorted by unicode value
u.sort_values(by='unicode_dec', ascending=True).head()
u.sort_values(by='unicode_dec', ascending=False).head() | _____no_output_____ | MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
**Select unwanted characters**

For this corpus, unwanted characters are characters in the standard Arabic character set. | idx1 = corpus_chars_df.unicode_hex.str.startswith('0x6')
idx2 = (corpus_chars_df.frequency>=5)
idx1.sum(), idx2.sum(), (idx1&idx2).sum()
unwanted_characters = sorted(corpus_chars_df.loc[~(idx1)].index.tolist())
print(len(unwanted_characters)) | 97
| MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
Text Preprocessing | def clean_text(text):
'''Make text lowercase, remove text in square brackets,remove links,remove punctuation
and remove words containing numbers.'''
text = str(text).lower()
#text = re.sub('<.*?>+', '', text)
#text = re.sub("s+"," ", text)
#text = re.sub("[^-9A-Za-z ]", "" , text)
return text
def clean_text(text):
    # replace every non-alphanumeric character with a space
    text = re.sub(r"[^A-Za-z0-9]", " ", text)
    # remove any remaining punctuation
    text = text.translate(str.maketrans(' ', ' ', string.punctuation))
    # keep alphabetic characters only (this also drops the digits kept above)
    text = re.sub('[^a-zA-Z]', ' ', text)
# remove numbers
# text = re.sub(r'\b\d+(?:\.\d+)?\s+', '', text)
#will replace newline with space
text = re.sub("\n"," ",text)
#will convert to lower case
text = text.lower()
# will split and join the words
text=' '.join(text.split())
return text
df['text'] = df['text'].apply(lambda x:clean_text(x))
test['text'] = test['text'].apply(lambda x:clean_text(x))
train = df.copy()
train.head(3)
train.shape | _____no_output_____ | MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
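A quick illustration (mine, not from the notebook) of what `clean_text` does to a raw Arabizi-style string; the sample text is hypothetical:

```python
sample = "3ajbetni BARSHA!!! 100% <3"
print(clean_text(sample))  # -> 'ajbetni barsha' (digits and punctuation stripped, lowercased)
```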
Unwanted characters | '''unwanted_characters_regexp = '[' + ''.join(unwanted_characters) + ']'
unwanted_characters_regexp'''
'''idx = train.text.map(lambda x: re.search(unwanted_characters_regexp,x)!=None)
idx.sum()'''
'''# Words that contain Arabic letters (that will be removed)
print(train.loc[idx].text.tolist())'''
'''train[idx].head()''' | _____no_output_____ | MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
Modelling

Split data into train and test | X = train['text']
y = train['label']
# Splitting the dataset into train and test set
from sklearn.model_selection import train_test_split
seed = 12
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size = 0.10, shuffle=True, random_state=0)
X.shape, y.shape | _____no_output_____ | MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
Logistic Regression | from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer, TfidfTransformer
from sklearn.feature_selection import SelectKBest, chi2
# Building a pipeline: We can write less code and do all of the above, by building a pipeline as follows:
# The names βvectβ , βtfidfβ and βclfβ are arbitrary but will be used later.
# We will be using the 'text_clf' going forward.
from sklearn.pipeline import Pipeline
tfidf = TfidfTransformer()
lr_clf = Pipeline([('vect', TfidfVectorizer(min_df= 5, sublinear_tf=True, norm='l2', ngram_range=(1, 4))),
('tfidf', TfidfTransformer()),
('chi', SelectKBest(chi2, k=20000)),
('clf', LogisticRegression())])
lr_clf = lr_clf.fit(X_train, y_train)
# Performance of the logistic regression classifier
import numpy as np
predicted = lr_clf.predict(X_test)
print(f"------------------\n{np.mean(predicted == y_test)*100}\n------------------")
from sklearn.svm import SVC
svm = SVC()
tfidf = TfidfVectorizer()
svm_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', SVC(C=1.0, kernel='linear', degree=3, gamma='auto'))])
svm = svm_clf.fit(X_train, y_train)
# Performance of the SVM classifier
import numpy as np
predicted = svm_clf.predict(X_test)
print(f"------------------\n{np.mean(predicted == y_test)*100}\n------------------")
| ------------------
81.52206100088836
------------------
| MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
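Raw accuracy hides per-class behaviour, so here is a short sketch (not in the original notebook) of a fuller evaluation for the fitted SVM pipeline above:

```python
from sklearn.metrics import classification_report, confusion_matrix

print(classification_report(y_test, predicted))  # precision/recall/F1 for each sentiment label
print(confusion_matrix(y_test, predicted))       # rows = true labels, columns = predictions
```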
Tokenization | X_train.shape, y_train.shape
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import LSTM, Conv1D, MaxPooling1D, Dropout
MAX_NB_WORDS = 20000
# get the raw text data
X_train = X_train.astype(str)
X_test = X_test.astype(str)
# finally, vectorize the text samples into a 2D integer tensor
tokenizer = Tokenizer(num_words=MAX_NB_WORDS, char_level=False)  # `num_words` replaced the old `nb_words` argument in Keras 2
tokenizer.fit_on_texts(X_train)
sequences = tokenizer.texts_to_sequences(X_train)
sequences_test = tokenizer.texts_to_sequences(X_test)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
sequences[0] | _____no_output_____ | MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
The tokenizer object stores a mapping (vocabulary) from word strings to token ids that can be inverted to reconstruct the original message (without formatting): | type(tokenizer.word_index), len(tokenizer.word_index)
index_to_word = dict((i, w) for w, i in tokenizer.word_index.items())
" ".join([index_to_word[i] for i in sequences[0]]) | _____no_output_____ | MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
Let's have a closer look at the tokenized sequences: | seq_lens = [len(s) for s in sequences]
print("average length: %0.1f" % np.mean(seq_lens))
print("max length: %d" % max(seq_lens))
%matplotlib inline
plt.hist(seq_lens, bins=50);
plt.hist([l for l in seq_lens if l < 30], bins=2);
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape) | (54027,)
(54027,)
(13507,)
(13507,)
| MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
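The tokenization cell imports `pad_sequences` and the recurrent layers but never uses them; the step those imports anticipate would look roughly like this (a sketch; the cutoff of 30 is my assumption, read off the sequence-length histogram above):

```python
MAX_SEQUENCE_LENGTH = 30  # assumed cutoff based on the histogram; not from the notebook

X_train_pad = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)
X_test_pad = pad_sequences(sequences_test, maxlen=MAX_SEQUENCE_LENGTH)
print(X_train_pad.shape, X_test_pad.shape)  # (54027, 30) (13507, 30)
```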
SGDClassifier | # Training Support Vector Machines - SVM and calculating its performance
from sklearn.linear_model import SGDClassifier
text_clf_svm = Pipeline([('vect', CountVectorizer()), ('tfidf', TfidfTransformer()),
('clf-svm', SGDClassifier(loss='hinge', penalty='l2',alpha=1e-9, max_iter=3, shuffle=True, random_state=0))])
text_clf_svm = text_clf_svm.fit(X_train, y_train)
predicted_svm = text_clf_svm.predict(X_test)
print(f"------------------\n{np.mean(predicted_svm == y_test)*100}\n------------------") | /usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_stochastic_gradient.py:557: ConvergenceWarning: Maximum number of iteration reached before convergence. Consider increasing max_iter to improve the fit.
ConvergenceWarning)
| MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
MultinomialNB | # Extracting features from text files
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
X_train_counts.shape
# TF-IDF
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
# Machine Learning
# Training Naive Bayes (NB) classifier on training data.
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(X_train_tfidf, y_train)
# Building a pipeline: We can write less code and do all of the above, by building a pipeline as follows:
# The names βvectβ , βtfidfβ and βclfβ are arbitrary but will be used later.
# We will be using the 'text_clf' going forward.
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', MultinomialNB())])
text_clf = text_clf.fit(X_train, y_train)
# Performance of NB Classifier
import numpy as np
predicted = text_clf.predict(X_test)
np.mean(predicted == y_test)*100 | _____no_output_____ | MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
SGD Classifier | # Training Support Vector Machines - SVM and calculating its performance
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
text_clf_svm = Pipeline([('vect', CountVectorizer()),
('clf-svm', SGDClassifier(loss='hinge', penalty='l2',alpha=1e-3, max_iter=5, random_state=42))])
text_clf_svm = text_clf_svm.fit(X_train, y_train)
predicted_svm = text_clf_svm.predict(X_test)
np.mean(predicted_svm == y_test) | /usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_stochastic_gradient.py:557: ConvergenceWarning: Maximum number of iteration reached before convergence. Consider increasing max_iter to improve the fit.
ConvergenceWarning)
| MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
Submission | sub = pd.read_csv("SampleSubmission.csv")
submission = pd.DataFrame()
submission['ID'] = test['ID']
submission.head()
submission.shape
pred = lr_clf.predict(test['text'])
pred
len(pred) | _____no_output_____ | MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
The index is still there, so we will set the column ID as the dataframe index. | submission['label'] = pred
submission.set_index('ID', inplace=True)
submission.head() | _____no_output_____ | MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
We have successfully replaced the index with the column ID. Now let us create our submission file. | submission.to_csv("lr_submission.csv")
| _____no_output_____ | MIT | Zindi_Sentiment_Analysis_Tunisian_Arabizi.ipynb | Paul-mwaura/Zindi-Sentiment-Analysis_Tunisian-Arabizi |
Imports | import numpy as np
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
plt.style.use('seaborn-white') | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Read and process data. Download the file from this URL: https://drive.google.com/file/d/1UWWIi-sz9g0x3LFvkIZjvK1r2ZaCqgGS/view?usp=sharing | import gdown
gdown.download('https://drive.google.com/uc?id=1UWWIi-sz9g0x3LFvkIZjvK1r2ZaCqgGS','text.txt', quiet=False)
data = open('text.txt', 'r').read() | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Process data and calculate indices | chars = list(set(data))
data_size, X_size = len(data), len(chars)
print("Corona Virus article has %d characters, %d unique characters" %(data_size, X_size))
char_to_idx = {ch:i for i,ch in enumerate(chars)}
idx_to_char = {i:ch for i,ch in enumerate(chars)} | Corona Virus article has 10223 characters, 75 unique characters
| MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Constants and Hyperparameters | Hidden_Layer_size = 100 #size of the hidden layer
Time_steps = 40 # Number of time steps (length of the sequence) used for training
learning_rate = 1e-1 # Learning Rate
weight_sd = 0.1 #Standard deviation of weights for initialization
z_size = Hidden_Layer_size + X_size #Size of concatenation(H, X) vector | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Activation Functions and Derivatives | def sigmoid(x): # sigmoid function
return 1/(1+np.exp(-x))
def dsigmoid(y): # derivative of sigmoid function
return y * (1-y)
def tanh(x): # tanh function
return np.tanh(x)
def dtanh(y): # derivative of tanh
return 1-y*y | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Quiz Question 1

What is the value of sigmoid(0) calculated from your code? (Answer up to 1 decimal point, e.g. 4.2 and NOT 4.29999999, no rounding off.)

Quiz Question 2

What is the value of dsigmoid(sigmoid(0)) calculated from your code? (Answer up to 2 decimal points, e.g. 4.29 and NOT 4.29999999, no rounding off.)

Quiz Question 3

What is the value of tanh(dsigmoid(sigmoid(0))) calculated from your code? (Answer up to 5 decimal points, e.g. 4.29999 and NOT 4.29999999, no rounding off.)

Quiz Question 4

What is the value of dtanh(tanh(dsigmoid(sigmoid(0)))) calculated from your code? (Answer up to 5 decimal points, e.g. 4.29999 and NOT 4.29999999, no rounding off.) | print('Quiz 1', sigmoid(0))
print('Quiz 2', dsigmoid(sigmoid(0)))
print('Quiz 3', tanh(dsigmoid(sigmoid(0))))
print('Quiz 4', dtanh(tanh(dsigmoid(sigmoid(0))))) | Quiz 1 0.5
Quiz 2 0.25
Quiz 3 0.24491866240370913
Quiz 4 0.940014848806378
| MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Parameters | class Param:
def __init__(self, name, value):
self.name = name
self.v = value # parameter value
self.d = np.zeros_like(value) # derivative
self.m = np.zeros_like(value) # momentum for Adagrad | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
We use random weights with normal distribution (0, weight_sd) for the tanh activation function and (0.5, weight_sd) for the `sigmoid` activation function. Biases are initialized to zeros.

LSTM

You are making this network; please note f, i, c and o (also "v") in the image below. Please note that we are concatenating the old_hidden_vector and new_input.

Quiz Question 4

In the class definition below, what should be size_a, size_b, and size_c? ONLY use the variables defined above. | size_a = Hidden_Layer_size
size_b = z_size
size_c = X_size
class Parameters:
def __init__(self):
self.W_f = Param('W_f', np.random.randn(size_a, size_b) * weight_sd + 0.5)
self.b_f = Param('b_f', np.zeros((size_a, 1)))
self.W_i = Param('W_i', np.random.randn(size_a, size_b) * weight_sd + 0.5)
self.b_i = Param('b_i', np.zeros((size_a, 1)))
self.W_C = Param('W_C', np.random.randn(size_a, size_b) * weight_sd)
self.b_C = Param('b_C', np.zeros((size_a, 1)))
self.W_o = Param('W_o', np.random.randn(size_a, size_b) * weight_sd + 0.5)
self.b_o = Param('b_o', np.zeros((size_a, 1)))
#For final layer to predict the next character
self.W_v = Param('W_v', np.random.randn(X_size, size_a) * weight_sd)
self.b_v = Param('b_v', np.zeros((size_c, 1)))
def all(self):
return [self.W_f, self.W_i, self.W_C, self.W_o, self.W_v,
self.b_f, self.b_i, self.b_C, self.b_o, self.b_v]
parameters = Parameters() | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Look at these operations which we'll be writing:

**Concatenation of h and x:**

$z\:=\:\left[h_{t-1},\:x\right]$

$f_t=\sigma\left(W_f\cdot z\:+\:b_f\:\right)$

$i_t=\sigma\left(W_i\cdot z\:+\:b_i\right)$

$\overline{C_t}=\tanh\left(W_C\cdot z\:+\:b_C\right)$

$C_t=f_t\ast C_{t-1}+i_t\ast \overline{C}_t$

$o_t=\sigma\left(W_o\cdot z\:+\:b_o\right)$

$h_t=o_t\ast\tanh\left(C_t\right)$

**Logits:**

$v_t=W_v\cdot h_t+b_v$

**Softmax:**

$\hat{y}=softmax\left(v_t\right)$ | def forward(x, h_prev, C_prev, p = parameters):
assert x.shape == (X_size, 1)
assert h_prev.shape == (Hidden_Layer_size, 1)
assert C_prev.shape == (Hidden_Layer_size, 1)
    z = np.row_stack((h_prev, x))
    # use the passed-in parameter object `p` (the original indexed the global
    # `parameters.all()` list, which silently ignored the `p` argument)
    f = sigmoid(np.dot(p.W_f.v, z) + p.b_f.v)
    i = sigmoid(np.dot(p.W_i.v, z) + p.b_i.v)
    C_bar = tanh(np.dot(p.W_C.v, z) + p.b_C.v)
    C = f * C_prev + i * C_bar
    o = sigmoid(np.dot(p.W_o.v, z) + p.b_o.v)
    h = o * tanh(C)
    v = np.dot(p.W_v.v, h) + p.b_v.v
    y = np.exp(v) / np.sum(np.exp(v))  # softmax
return z, f, i, C_bar, C, o, h, v, y | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
You must finish the function above before you can attempt the questions below.

Quiz Question 5

What is the output of 'print(len(forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)), parameters)))'? | print(len(forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)), parameters))) | 9
| MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Quiz Question 6

Assuming you have fixed the forward function, run this command: z, f, i, C_bar, C, o, h, v, y = forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)))

Now, find these values:

1. print(z.shape)
2. print(np.sum(z))
3. print(np.sum(f))

Copy and paste the exact values you get in the logs into the quiz. | z, f, i, C_bar, C, o, h, v, y = forward(np.zeros((X_size, 1)), np.zeros((Hidden_Layer_size, 1)), np.zeros((Hidden_Layer_size, 1)))
print(z.shape)
print(np.sum(z))
print(np.sum(f)) | (175, 1)
0.0
50.0
| MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Backpropagation

Here we are defining the backpropagation. It's complicated, so here is the whole code. (Please note that this will work only if your earlier code is correct.) | def backward(target, dh_next, dC_next, C_prev,
z, f, i, C_bar, C, o, h, v, y,
p = parameters):
assert z.shape == (X_size + Hidden_Layer_size, 1)
assert v.shape == (X_size, 1)
assert y.shape == (X_size, 1)
for param in [dh_next, dC_next, C_prev, f, i, C_bar, C, o, h]:
assert param.shape == (Hidden_Layer_size, 1)
dv = np.copy(y)
dv[target] -= 1
p.W_v.d += np.dot(dv, h.T)
p.b_v.d += dv
dh = np.dot(p.W_v.v.T, dv)
dh += dh_next
do = dh * tanh(C)
do = dsigmoid(o) * do
p.W_o.d += np.dot(do, z.T)
p.b_o.d += do
dC = np.copy(dC_next)
dC += dh * o * dtanh(tanh(C))
dC_bar = dC * i
dC_bar = dtanh(C_bar) * dC_bar
p.W_C.d += np.dot(dC_bar, z.T)
p.b_C.d += dC_bar
di = dC * C_bar
di = dsigmoid(i) * di
p.W_i.d += np.dot(di, z.T)
p.b_i.d += di
df = dC * C_prev
df = dsigmoid(f) * df
p.W_f.d += np.dot(df, z.T)
p.b_f.d += df
dz = (np.dot(p.W_f.v.T, df)
+ np.dot(p.W_i.v.T, di)
+ np.dot(p.W_C.v.T, dC_bar)
+ np.dot(p.W_o.v.T, do))
dh_prev = dz[:Hidden_Layer_size, :]
dC_prev = f * dC
return dh_prev, dC_prev | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Forward and Backward Combined Pass

Let's first clear the gradients before each backward pass. | def clear_gradients(params = parameters):
for p in params.all():
p.d.fill(0) | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Clip gradients to mitigate exploding gradients | def clip_gradients(params = parameters):
for p in params.all():
np.clip(p.d, -1, 1, out=p.d) | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Calculate and store the values in the forward pass. Accumulate gradients in the backward pass and clip them to avoid exploding gradients.

- inputs, targets: lists of integers (character indexes)
- h_prev: the initial h at t = -1 (size H x 1)
- C_prev: the initial C at t = -1 (size H x 1)

Returns loss, final h_T and C_T. | def forward_backward(inputs, targets, h_prev, C_prev):
global paramters
# To store the values for each time step
x_s, z_s, f_s, i_s, = {}, {}, {}, {}
C_bar_s, C_s, o_s, h_s = {}, {}, {}, {}
v_s, y_s = {}, {}
# Values at t - 1
h_s[-1] = np.copy(h_prev)
C_s[-1] = np.copy(C_prev)
loss = 0
# Loop through time steps
assert len(inputs) == Time_steps
for t in range(len(inputs)):
x_s[t] = np.zeros((X_size, 1))
x_s[t][inputs[t]] = 1 # Input character
(z_s[t], f_s[t], i_s[t],
C_bar_s[t], C_s[t], o_s[t], h_s[t],
v_s[t], y_s[t]) = \
forward(x_s[t], h_s[t - 1], C_s[t - 1]) # Forward pass
loss += -np.log(y_s[t][targets[t], 0]) # Loss for at t
clear_gradients()
dh_next = np.zeros_like(h_s[0]) #dh from the next character
dC_next = np.zeros_like(C_s[0]) #dh from the next character
for t in reversed(range(len(inputs))):
# Backward pass
dh_next, dC_next = \
backward(target = targets[t], dh_next = dh_next,
dC_next = dC_next, C_prev = C_s[t-1],
z = z_s[t], f = f_s[t], i = i_s[t], C_bar = C_bar_s[t],
C = C_s[t], o = o_s[t], h = h_s[t], v = v_s[t],
y = y_s[t])
clip_gradients()
return loss, h_s[len(inputs) - 1], C_s[len(inputs) - 1] | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
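Backpropagation code like this is easy to get subtly wrong, so a common sanity check (my addition, not part of the lab) is to compare the analytic gradient of a single weight against a centered finite difference:

```python
def gradient_check_one(param, idx, inputs, targets, h_prev, C_prev, delta=1e-5):
    """Compare the analytic gradient of one scalar weight with a numerical estimate.
    Note: forward_backward clips gradients to [-1, 1], so pick weights whose
    gradients are small, or temporarily disable clip_gradients for this check."""
    forward_backward(inputs, targets, h_prev, C_prev)  # populates param.d
    analytic = param.d.flat[idx]
    original = param.v.flat[idx]
    param.v.flat[idx] = original + delta
    loss_plus, _, _ = forward_backward(inputs, targets, h_prev, C_prev)
    param.v.flat[idx] = original - delta
    loss_minus, _, _ = forward_backward(inputs, targets, h_prev, C_prev)
    param.v.flat[idx] = original  # restore the weight
    numeric = (loss_plus - loss_minus) / (2 * delta)
    print('analytic: %e  numeric: %e' % (analytic, numeric))

# Example call (inputs and targets must have length Time_steps):
# zeros = np.zeros((Hidden_Layer_size, 1))
# gradient_check_one(parameters.W_f, 0, inputs, targets, zeros, zeros)
```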
Sample the next character | def sample(h_prev, C_prev, first_char_idx, sentence_length):
x = np.zeros((X_size, 1))
x[first_char_idx] = 1
h = h_prev
C = C_prev
indexes = []
for t in range(sentence_length):
_, _, _, _, C, _, h, _, p = forward(x, h, C)
idx = np.random.choice(range(X_size), p=p.ravel())
x = np.zeros((X_size, 1))
x[idx] = 1
indexes.append(idx)
return indexes | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Training (Adagrad)

Update the graph and display a sample output. | def update_status(inputs, h_prev, C_prev):
#initialized later
global plot_iter, plot_loss
global smooth_loss
# Get predictions for 200 letters with current model
sample_idx = sample(h_prev, C_prev, inputs[0], 200)
txt = ''.join(idx_to_char[idx] for idx in sample_idx)
# Clear and plot
plt.plot(plot_iter, plot_loss)
display.clear_output(wait=True)
plt.show()
#Print prediction and loss
print("----\n %s \n----" % (txt, ))
print("iter %d, loss %f" % (iteration, smooth_loss)) | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Update Parameters

\begin{align}
\theta_i &= \theta_i - \eta\,\frac{d\theta_i}{\sqrt{\sum_{\tau} \left(d\theta_i^{(\tau)}\right)^2 + \epsilon}} \\
d\theta_i &= \frac{\partial L}{\partial \theta_i}
\end{align} | def update_paramters(params = parameters):
for p in params.all():
p.m += p.d * p.d # Calculate sum of gradients
#print(learning_rate * dparam)
p.v += -(learning_rate * p.d / np.sqrt(p.m + 1e-8)) | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
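To see why Adagrad's per-parameter denominator matters, here is a tiny standalone illustration (mine) of the effective step size shrinking as squared gradients accumulate:

```python
import numpy as np

m, lr = 0.0, 0.1
for step, g in enumerate([1.0, 1.0, 1.0, 1.0], start=1):
    m += g * g  # same accumulation as p.m above
    print('step %d: effective step = %.4f' % (step, lr * g / np.sqrt(m + 1e-8)))
# step 1: 0.1000, step 2: 0.0707, step 3: 0.0577, step 4: 0.0500
```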
Initialize the smoothed loss (an exponential moving average) and the arrays used for plotting. (The original source also delays keyboard interrupts so training is never stopped in the middle of an iteration, but that helper is not shown here.) | # Exponential average of loss
# Initialize to a error of a random model
smooth_loss = -np.log(1.0 / X_size) * Time_steps
iteration, pointer = 0, 0
# For the graph
plot_iter = np.zeros((0))
plot_loss = np.zeros((0)) | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Training Loop | iter = 50000
while iter > 0:
# Reset
if pointer + Time_steps >= len(data) or iteration == 0:
g_h_prev = np.zeros((Hidden_Layer_size, 1))
g_C_prev = np.zeros((Hidden_Layer_size, 1))
pointer = 0
inputs = ([char_to_idx[ch]
for ch in data[pointer: pointer + Time_steps]])
targets = ([char_to_idx[ch]
for ch in data[pointer + 1: pointer + Time_steps + 1]])
loss, g_h_prev, g_C_prev = \
forward_backward(inputs, targets, g_h_prev, g_C_prev)
smooth_loss = smooth_loss * 0.999 + loss * 0.001
# Print every hundred steps
if iteration % 100 == 0:
update_status(inputs, g_h_prev, g_C_prev)
update_paramters()
plot_iter = np.append(plot_iter, [iteration])
plot_loss = np.append(plot_loss, [loss])
pointer += Time_steps
iteration += 1
iter = iter -1 | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Quiz Question 7

Run the above code for 50000 iterations, making sure that Hidden_Layer_size is 100 and Time_steps is 40. What is the loss value you're seeing? | iter = 50000
while iter > 0:
# Reset
if pointer + Time_steps >= len(data) or iteration == 0:
g_h_prev = np.zeros((Hidden_Layer_size, 1))
g_C_prev = np.zeros((Hidden_Layer_size, 1))
pointer = 0
inputs = ([char_to_idx[ch]
for ch in data[pointer: pointer + Time_steps]])
targets = ([char_to_idx[ch]
for ch in data[pointer + 1: pointer + Time_steps + 1]])
loss, g_h_prev, g_C_prev = \
forward_backward(inputs, targets, g_h_prev, g_C_prev)
smooth_loss = smooth_loss * 0.999 + loss * 0.001
# Print every hundred steps
if iteration % 100 == 0:
update_status(inputs, g_h_prev, g_C_prev)
update_paramters()
plot_iter = np.append(plot_iter, [iteration])
plot_loss = np.append(plot_loss, [loss])
pointer += Time_steps
iteration += 1
iter = iter -1 | _____no_output_____ | MIT | S10/EVA P2S3_Q7.ipynb | pankaj90382/TSAI-2 |
Triangle Meshes

Along with [points](2_Points.ipynb), [timeseries](3_Timeseries.ipynb), [trajectories](4_Trajectories.ipynb), and structured [grids](5_Grids.ipynb), Datashader can rasterize large triangular meshes, such as those often used to simulate data on an irregular grid.

Any polygon can be represented as a set of triangles, and any shape can be approximated by a polygon, so the triangular-mesh support has many potential uses. In each case, the triangular mesh represents (part of) a *surface*, not a volume, and so the result fits directly into a 2D plane rather than requiring 3D rendering. This process of rasterizing a triangular mesh means generating values along specified regularly spaced intervals in the plane. Examples in the [Direct3D docs](https://msdn.microsoft.com/en-us/library/windows/desktop/cc627092.aspx) show how this process works for a variety of edge cases.

That diagram uses "pixels" and colors (grayscale), but for datashader the generated raster is more precisely interpreted as a 2D array with bins, not pixels, because the values involved are numeric rather than colors. (With datashader, colors are assigned only in the later "shading" stage, not during rasterization itself.) As shown in the diagram, a pixel (bin) is treated as belonging to a given triangle if its center falls either inside that triangle or along its top or left edge.

The specific algorithm used is based on the approach of [Pineda (1988)](http://people.csail.mit.edu/ericchan/bib/pdf/p17-pineda.pdf), which has the following features:

* Classification of pixels relies on triangle convexity
* Embarrassingly parallel linear calculations
* Inner loop can be calculated incrementally, i.e. with very "cheap" computations

and a few assumptions:

* Triangles should be non-overlapping (to ensure repeatable results for different numbers of cores)
* Triangles should be specified consistently either in clockwise or in counterclockwise order of vertices (winding)

Trimesh rasterization is not yet GPU-accelerated, but it's fast because of [Numba](http://numba.pydata.org) compiling Python into SIMD machine code instructions.

Tiny example

To start with, let's generate a tiny set of 10 vertices at random locations: | import numpy as np, datashader as ds, pandas as pd
import datashader.utils as du, datashader.transfer_functions as tf
from scipy.spatial import Delaunay
import dask.dataframe as dd
n = 10
np.random.seed(2)
x = np.random.uniform(size=n)
y = np.random.uniform(size=n)
z = np.random.uniform(0,1.0,x.shape)
pts = np.stack((x,y,z)).T
verts = pd.DataFrame(np.stack((x,y,z)).T, columns=['x', 'y' , 'z']) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
Here we have a set of random x,y locations and associated z values. We can see the numeric values with "head" and plot them (with color for z) using datashader's usual points plotting: | cvs = ds.Canvas(plot_height=400,plot_width=400)
tf.Images(verts.head(15), tf.spread(tf.shade(cvs.points(verts, 'x', 'y', agg=ds.mean('z')), name='Points'))) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
To make a trimesh, we need to connect these points together into a non-overlapping set of triangles. One well-established way of doing so is [Delaunay triangulation](https://en.wikipedia.org/wiki/Delaunay_triangulation): | def triangulate(vertices, x="x", y="y"):
"""
Generate a triangular mesh for the given x,y,z vertices, using Delaunay triangulation.
For large n, typically results in about double the number of triangles as vertices.
"""
triang = Delaunay(vertices[[x,y]].values)
print('Given', len(vertices), "vertices, created", len(triang.simplices), 'triangles.')
return pd.DataFrame(triang.simplices, columns=['v0', 'v1', 'v2'])
%time tris = triangulate(verts) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
The result of triangulation is a set of triangles, each composed of three indexes into the vertices array. The triangle data can then be visualized by datashader's ``trimesh()`` method: | tf.Images(tris.head(15), tf.shade(cvs.trimesh(verts, tris))) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
By default, datashader will rasterize your trimesh using z values [linearly interpolated between the z values that are specified at the vertices](https://en.wikipedia.org/wiki/Barycentric_coordinate_systemInterpolation_on_a_triangular_unstructured_grid). The shading will then show these z values as colors, as above. You can enable or disable interpolation as you wish: | from colorcet import rainbow as c
tf.Images(tf.shade(cvs.trimesh(verts, tris, interpolate='nearest'), cmap=c, name='10 Vertices'),
tf.shade(cvs.trimesh(verts, tris, interpolate='linear'), cmap=c, name='10 Vertices Interpolated')) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
More complex example

The small example above should demonstrate how triangle-mesh rasterization works, but in practice datashader is intended for much larger datasets. Let's consider a sine-based function `f` whose frequency varies with radius: | rad = 0.05,1.0
def f(x,y):
rsq = x**2+y**2
return np.where(np.logical_or(rsq<rad[0],rsq>rad[1]), np.nan, np.sin(10/rsq)) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
We can easily visualize this function by sampling it on a raster with a regular grid: | n = 400
ls = np.linspace(-1.0, 1.0, n)
x,y = np.meshgrid(ls, ls)
img = f(x,y)
raster = tf.shade(tf.Image(img, name="Raster"))
raster | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
However, you can see pronounced aliasing towards the center of this function, as the frequency starts to exceed the sampling density of the raster. Instead of sampling at regularly spaced locations like this, let's try evaluating the function at random locations whose density varies towards the center: | def polar_dropoff(n, r_start=0.0, r_end=1.0):
ls = np.linspace(0, 1.0, n)
ex = np.exp(2-5*ls)/np.exp(2)
radius = r_start+(r_end-r_start)*ex
theta = np.random.uniform(0.0,1.0, n)*np.pi*2.0
x = radius * np.cos( theta )
y = radius * np.sin( theta )
return x,y
x,y = polar_dropoff(n*n, np.sqrt(rad[0]), np.sqrt(rad[1]))
z = f(x,y)
verts = pd.DataFrame(np.stack((x,y,z)).T, columns=['x', 'y' , 'z']) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
We can now plot the x,y points and optionally color them with the z value (the value of the function f(x,y)): | cvs = ds.Canvas(plot_height=400,plot_width=400)
tf.Images(tf.shade(cvs.points(verts, 'x', 'y'), name='Points'),
tf.shade(cvs.points(verts, 'x', 'y', agg=ds.mean('z')), name='PointsZ')) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
The points are clearly covering the area of the function that needs dense sampling, and the shape of the function can (roughly) be made out when the points are colored in the plot. But let's go ahead and triangulate so that we can interpolate between the sampled values for display: | %time tris = triangulate(verts) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
And let's pre-compute the combined mesh data structure for these vertices and triangles, which for very large meshes (much larger than this one!) would save plotting time later: | %time mesh = du.mesh(verts,tris) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
This mesh can be used for all future plots as long as we don't change the number or ordering of vertices or triangles, which saves time for much larger grids.We can now plot the trimesh to get an approximation of the function with noisy sampling locally to disrupt the interference patterns observed in the regular-grid version above and preserve fidelity where it is needed. (Usually one wouldn't do this just for the purposes of plotting a function, since the eventual display on a screen is a raster image no matter what, but having a variable grid is crucial if running a simulation where fine detail is needed only in certain regions.) | tf.shade(cvs.trimesh(verts, tris, mesh=mesh)) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
The fine detail in the heavily sampled regions is visible when zooming in closer (without resampling the function): | tf.Images(*([tf.shade(ds.Canvas(x_range=r, y_range=r).trimesh(verts, tris, mesh=mesh))
for r in [(0.1,0.8), (0.14,0.4), (0.15,0.2)]])) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
Notice that the central disk is being filled in above, even though the function is not defined in the center. That's a limitation of Delaunay triangulation, which will create convex regions covering the provided vertices. You can use other tools for creating triangulations that have holes, align along certain regions, have specified densities, etc., such as [MeshPy](https://mathema.tician.de/software/meshpy) (Python bindings for [Triangle](http://www.cs.cmu.edu/~quake/triangle.html)).

Aggregation functions

Like other datashader methods, the ``trimesh()`` method accepts an ``agg`` argument (defaulting to ``mean()``) for a reduction function that determines how the values from multiple triangles will contribute to the value of a given pixel: | tf.Images(tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.mean('z')),name='mean'),
tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.max('z')), name='max'),
tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.min('z')), name='min')) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
The three plots above should be nearly identical, except near the center disk where individual pixels start to have contributions from a large number of triangles covering different portions of the function space. In this inner ring, ``mean`` reports the average value of the surface inside that pixel, ``max`` reports the maximum value of the surface (hence being darker values in this color scheme), and ``min`` reports the minimum value contained in each pixel. The ``min`` and ``max`` reductions are useful when looking at a very large mesh, revealing details not currently visible. For instance, if a mesh has a deep but very narrow trough, it will still show up in the ``min`` plot regardless of your raster's resolution, while it might be missed on the ``mean`` plot. Other reduction functions are useful for making a mask of the meshed area (``any``), for showing how many triangles are present in a given pixel (``count``), and for reporting the diversity of values within each pixel (``std`` and ``var``): | tf.Images(tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.any('z')), name='any'),
tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.count()), name='count'),
tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.std('z')), name='std')).cols(3) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
Parallelizing trimesh aggregation with Dask

The trimesh aggregation process can be parallelized by providing `du.mesh` and `Canvas.trimesh` with partitioned Dask dataframes.

**Note:** While the calls to `Canvas.trimesh` will be parallelized across the partitions of the Dask dataframe, the construction of the partitioned mesh using `du.mesh` is not currently parallelized. Furthermore, it currently requires loading the entire `verts` and `tris` dataframes into memory in order to construct the partitioned mesh. Because of these constraints, this approach is most useful for the repeated aggregation of large meshes that fit in memory on a single multicore machine. | verts_ddf = dd.from_pandas(verts, npartitions=4)
tris_ddf = dd.from_pandas(tris, npartitions=4)
mesh_ddf = du.mesh(verts_ddf, tris_ddf)
mesh_ddf
tf.shade(cvs.trimesh(verts_ddf, tris_ddf, mesh=mesh_ddf)) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
Interactive plots

By their nature, fully exploring irregular grids needs to be interactive, because the resolution of the screen and the visual system are fixed. Trimesh renderings can be generated as above and then displayed interactively using the datashader support in [HoloViews](http://holoviews.org). | import holoviews as hv
from holoviews.operation.datashader import datashade
hv.extension("bokeh") | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
HoloViews is designed to make working with data easier, including support for large or small trimeshes. With HoloViews, you first declare a ``hv.Trimesh`` object, then you apply the ``datashade()`` (or just ``aggregate()``) operation if the data is large enough to require datashader. Notice that HoloViews expects the triangles and vertices in the *opposite* order as datashader's ``cvs.trimesh()``, because the vertices are optional for HoloViews: | wireframe = datashade(hv.TriMesh((tris,verts), label="Wireframe").edgepaths)
trimesh = datashade(hv.TriMesh((tris,hv.Points(verts, vdims='z')), label="TriMesh"), aggregator=ds.mean('z'))
(wireframe + trimesh).opts(width=400, height=400) | _____no_output_____ | BSD-3-Clause | examples/user_guide/6_Trimesh.ipynb | odidev/datashader |
Reformer Efficient Attention: Ungraded LabThe videos describe two 'reforms' made to the Transformer to make it more memory and compute efficient. The *Reversible Layers* reduce memory and *Locality Sensitive Hashing(LSH)* reduces the cost of the Dot Product attention for large input sizes. This ungraded lab will look more closely at LSH and how it is used in the Reformer model.Specifically, the notebook has 3 goals* review dot-product self attention for reference* examine LSH based self attention* extend our understanding and familiarity with Trax infrastructure Outline- [Part 1: Trax Efficient Attention classes](1)- [Part 2: Full Dot Product Self Attention](2) - [2.1 Description](2.1) - [2.1.1 our_softmax](2.1.1) - [2.2 our simple attend](2.2) - [2.3 Class OurSelfAttention](2.3)- [Part 3: Trax LSHSelfAttention](3) - [3.1 Description](3.1) - [3.2 our_hash_vectors](3.2) - [3.3 Sorting Buckets](3.3) - [3.4 Chunked dot product attention](3.4) - [3.5 OurLSHSelfAttention](3.5) Part 1.0 Trax Efficient Attention classesTrax is similar to other popular NN development platforms such as Keras (now integrated into Tensorflow) and Pytorch in that it uses 'layers' as a useful level of abstraction. Layers are often represented as *classes*. We're going to improve our understanding of Trax by locally extending the classes used in the attention layers. We will extend only the 'forward' functions and utilize the existing attention layers as parent classes. The original code can be found at [github:trax/layers/Research/Efficient_attention](https://github.com/google/trax/blob/v1.3.4/trax/layers/research/efficient_attention.py). This link references release 1.3.4 but note that this is under the 'research' directory as this is an area of active research. When accessing the code on Github for review on this assignment, be sure you select the 1.3.4 release tag, the master copy may have new changes.:Figure 1: Reference Tag 1.3.4 on githubWhile Trax uses classes liberally, we have not built many classes in the course so far. Let's spend a few moments reviewing the classes we will be using.Figure 2: Classes from Trax/layers/Research/Efficient_Attention.py that we will be utilizing. Starting on the right in the diagram below you see EfficientAttentionBase. The parent to this class is the base.layer which has the routines used by all layers. EfficientAttentionBase leaves many routines to be overridden by child classes - but it has an important feature in the *Forward* routine. It supports a `use_reference_code` capability that selects implementations that limit some of the complexities to provide a more easily understood version of the algorithms. In particular, it implements a nested loop that treats each *'example, head'* independently. This simplifies our work as we need only worry about matrix operations on one *'example, head'* at a time. This loop calls *forward_unbatched*, which is the child process that we will be overriding.On the top left are the outlines of the two child classes we will be using. The SelfAttention layer is a 'traditional' implementation of the dot product attention. We will be implementing the *forward_unbatched* version of this to highlight the differences between this and the LSH implementation.Below that is the LSHSelfAttention. This is the routine used in the Reformer architecture. 
We will override the *forward_unbatched* section of this and some of the utility functions it uses to explore its implementation in more detail.

The code we will be working with is from the Trax source, and as such has implementation details that will make it a bit harder to follow. However, it will allow use of the results along with the rest of the Trax infrastructure. I will try to briefly describe these as they arise. The [Trax documentation](https://trax-ml.readthedocs.io/en/latest/) can also be referenced.

Part 1.2 Trax Details

The goal in this notebook is to override a few routines in the Trax classes with our own versions. To maintain their functionality in a full Trax environment, many of the details we might ignore in example versions of routines will be maintained in this code. Here are some of the considerations that may impact our code:
* Trax operates with multiple back-end libraries; we will see special cases that utilize unique features.
* 'Fancy' numpy indexing is not supported in all backend environments and must be emulated in other ways.
* Some operations don't have gradients for backprop and must be ignored or include forced re-evaluation.

Here are some of the functions we may see:
* Abstracted as `fastmath`, Trax supports multiple backends such as [Jax](https://github.com/google/jax) and [Tensorflow2](https://github.com/tensorflow/tensorflow)
* [tie_in](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.tie_in.html): Some non-numeric operations must be invoked during backpropagation. Normally, the gradient compute graph would determine invocation, but these functions are not included. To force re-evaluation, they are 'tied' to other numeric operations using tie_in.
* [stop_gradient](https://trax-ml.readthedocs.io/en/latest/trax.fastmath.html): Some operations are intentionally excluded from backprop gradient calculations by setting their gradients to zero.
* Below we will execute `from trax.fastmath import numpy as np`; this uses accelerated forms of numpy functions. This is, however, a *subset* of numpy. | import os
import trax
from trax import layers as tl # core building block
import jax
from trax import fastmath # uses jax, offers numpy on steroids
# fastmath.use_backend('tensorflow-numpy')
import functools
from trax.fastmath import numpy as np # note, using fastmath subset of numpy!
from trax.layers import (
tie_in,
length_normalized,
apply_broadcasted_dropout,
look_adjacent,
permute_via_gather,
permute_via_sort,
) | INFO:tensorflow:tokens_length=568 inputs_length=512 targets_length=114 noise_density=0.15 mean_noise_span_length=3.0
| MIT | Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb | aibenStunner/NLP-specialization |
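Before moving on, here is a quick taste of the `fastmath` subset described above. This is a sketch only; it assumes the default jax backend, and simply shows that `stop_gradient` and `tie_in` pass their values through while only affecting gradient bookkeeping.

```python
# A quick taste of the fastmath subset described above (a sketch; the default
# jax backend is assumed).
demo = np.arange(4, dtype=np.float32)
print(type(demo))                    # a jax array, not a classic numpy ndarray
print(fastmath.stop_gradient(demo))  # values unchanged; gradients are zeroed
print(tie_in(demo, np.zeros(2)))     # zeros 'tied' to demo to force re-evaluation
```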
Part 2 Full Dot-Product Self Attention

Part 2.1 Description

Figure 3: Project datapath and primary data structures and where they are implemented

The diagram above shows many of the familiar data structures and operations related to attention and describes the routines in which they are implemented. We will start by working on *our_simple_attend*, our simpler version of the original *attend* function. We will review the steps in performing dot-product attention with more focus on the details of the operations and their significance. This is useful when comparing to LSH attention. Note we will be discussing a single example/head unless otherwise specified.

Figure 4: dot-product of Query and Key

The *attend* function receives *Query* and *Key*. As a reminder, they are produced by a matrix multiply of all the inputs with a single set of weights. We will describe the inputs as *embeddings* assuming an NLP application; however, this is not required. This matrix multiply works very much like a convolutional network, where a set of weights (a filter) slides across the input vectors, leaving behind a map of the similarity of the input to the filter. In this case, the filters are the weight matrices $W^Q$ and $W^K$. The resulting maps are Q and K. Q and K have the dimensions (n_seq, n_q), where n_seq is the number of input embeddings and n_q or n_k is the selected size of the Q or K vectors. Note the shading of Q and K; this reflects the fact that each entry is associated with a particular input embedding. You will note later in the code that K is optional. Apparently, similar results can be achieved using Query alone, saving the compute and storage associated with K. In that case, the dot-product in *attend* is effectively matmul(q, q.T).

Note the resulting dot-product (*Dot*) entries describe a complete (n_seq, n_seq) map of the similarity of all entries of q vs all entries of k. This is reflected in the notation in the dot-product boxes of $w_n$, $w_m$ representing word_n, word_m. Note that each row of *Dot* describes the relationship of an input embedding, say $w_0$, with every other input. In some applications some values are masked. This can be used, for example, to exclude results that occur later in time (causal) or to mask padding or other inputs.

Figure 5: Masking

The routine below, *mask_self_attention*, implements a flexible masking capability. The masking is controlled by the information in q_info and kv_info. | def mask_self_attention(
dots, q_info, kv_info, causal=True, exclude_self=True, masked=False
):
"""Performs masking for self-attention."""
if causal:
mask = fastmath.lt(q_info, kv_info).astype(np.float32)
dots = dots - 1e9 * mask
if exclude_self:
mask = np.equal(q_info, kv_info).astype(np.float32)
dots = dots - 1e5 * mask
if masked:
zeros_like_kv_info = tie_in(kv_info, np.zeros_like(kv_info))
mask = fastmath.lt(kv_info, zeros_like_kv_info).astype(np.float32)
dots = dots - 1e9 * mask
return dots | _____no_output_____ | MIT | Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb | aibenStunner/NLP-specialization |
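To see what the masking does, here is a small demonstration on toy values (a sketch; the sizes and values are made up for illustration). With `causal=True`, positions where the query index precedes the key index receive a large negative offset, and `exclude_self=True` penalizes the diagonal, so both effectively vanish after the softmax.

```python
# A toy demonstration of mask_self_attention (illustrative values only).
toy_dots = np.zeros((4, 4))
toy_info = np.arange(4, dtype=np.int32)
print(mask_self_attention(toy_dots, toy_info[:, None], toy_info[None, :],
                          causal=True, exclude_self=True, masked=False))
# expect -1e9 above the diagonal (future positions) and -1e5 on the diagonal
```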
A SoftMax is applied per row of the *Dot* matrix to scale the values in the row between 0 and 1.

Figure 6: SoftMax per row of Dot

Part 2.1.1 our_softmax

This code uses a separable form of the softmax calculation. Recall the softmax:
$$ softmax(x_i)=\frac{\exp(x_i)}{\sum_j \exp(x_j)}\tag{1}$$
This can be alternately implemented as:
$$ logsumexp(x)=\log{({\sum_j \exp(x_j)})}\tag{2}$$
$$ softmax(x_i)=\exp({x_i - logsumexp(x)})\tag{3}$$

The work below will maintain a copy of the logsumexp, allowing the softmax to be completed in sections. You will see how this is useful later in the LSHSelfAttention class.

We'll create a routine to implement that here, with the addition of a passthrough. The matrix operations we will be working on below are easier to follow if we can maintain integer values. So, for tests, we will skip the softmax in some cases. | def our_softmax(x, passthrough=False):
""" softmax with passthrough"""
logsumexp = fastmath.logsumexp(x, axis=-1, keepdims=True)
o = np.exp(x - logsumexp)
if passthrough:
return (x, np.zeros_like(logsumexp))
else:
return (o, logsumexp) | _____no_output_____ | MIT | Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb | aibenStunner/NLP-specialization |
Let's check our implementation. | ## compare softmax(a) using both methods
a = np.array([1.0, 2.0, 3.0, 4.0])
sma = np.exp(a) / sum(np.exp(a))
print(sma)
sma2, a_logsumexp = our_softmax(a)
print(sma2)
print(a_logsumexp) | [0.0320586 0.08714432 0.2368828 0.6439142 ]
[0.0320586 0.08714431 0.23688279 0.64391416]
[4.44019]
| MIT | Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb | aibenStunner/NLP-specialization |
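Why keep the logsumexp around? It lets us combine softmaxes that were computed over separate sections, which is exactly what the multi-hash combination in Part 3 will need. A sketch, reusing `a` from the check above:

```python
# Combining two sectioned softmaxes via their logsumexp values (a sketch).
first, lse1 = our_softmax(a[:2])
second, lse2 = our_softmax(a[2:])
total_lse = fastmath.logsumexp(np.concatenate([lse1, lse2]))
combined = np.concatenate([first * np.exp(lse1 - total_lse),
                           second * np.exp(lse2 - total_lse)])
print(combined)  # should match our_softmax(a)[0] printed above
```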
The purpose of the dot-product is to 'focus attention' on some of the inputs. Dot now has entries appropriately scaled to enhance some values and reduce others. These are now applied to the $V$ entries.

Figure 7: Applying Attention to $V$

$V$ is of size (n_seq, n_v). Note the shading in the diagram. This is to draw attention to the operation of the matrix multiplication, which is detailed below.

Figure 8: The Matrix Multiply applies attention to the values of V

$V$ is formed by a matrix multiply of the input embedding with the weight matrix $W^v$, whose values were set by backpropagation. The row entries of $V$ are then related to the corresponding input embedding. The matrix multiply weights the first column of V, representing a section of each of the input embeddings, with the first row of Dot, representing the similarity of $w_0$ and each word of the input embedding, and deposits the value in $Z$.

Part 2.2 our_simple_attend

In this section we'll work on an implementation of *attend* whose operations you can see in figure 3. It is a slightly simplified version of the routine in [efficient_attention.py](https://github.com/google/trax/blob/v1.3.4/trax/layers/research/efficient_attention.py). We will fill in a few lines of code. The main goal is to become familiar with the routine. You have implemented similar functionality in a previous assignment.

**Instructions**
**Step 1:** matrix multiply (np.matmul) q and the k 'transpose' kr.
**Step 2:** use our_softmax() to perform a softmax on the masked output of the dot product, dots.
**Step 3:** matrix multiply (np.matmul) dots and v. | def our_simple_attend(
q,
k=None,
v=None,
mask_fn=None,
q_info=None,
kv_info=None,
dropout=0.0,
rng=None,
verbose=False,
passthrough=False,
):
"""Dot-product attention, with masking, without optional chunking and/or.
Args:
q: Query vectors, shape [q_len, d_qk]
k: Key vectors, shape [kv_len, d_qk]; or None
v: Value vectors, shape [kv_len, d_v]
mask_fn: a function reference that implements masking (e.g. mask_self_attention)
q_info: Query-associated metadata for masking
kv_info: Key-associated metadata for masking
dropout: Dropout rate
rng: RNG for dropout
Returns:
A tuple (output, dots_logsumexp). The output has shape [q_len, d_v], and
dots_logsumexp has shape [q_len]. The logsumexp of the attention
probabilities is useful for combining multiple rounds of attention (as in
LSH attention).
"""
assert v is not None
share_qk = k is None
if share_qk:
k = q
if kv_info is None:
kv_info = q_info
if share_qk:
k = length_normalized(k)
k = k / np.sqrt(k.shape[-1])
# Dot-product attention.
kr = np.swapaxes(k, -1, -2) # note the fancy transpose for later..
## Step 1 ##
dots = np.matmul(q, kr)
if verbose:
print("Our attend dots", dots.shape)
# Masking
if mask_fn is not None:
dots = mask_fn(dots, q_info[..., :, None], kv_info[..., None, :])
# Softmax.
# dots_logsumexp = fastmath.logsumexp(dots, axis=-1, keepdims=True) #original
# dots = np.exp(dots - dots_logsumexp) #original
## Step 2 ##
# replace with our_softmax()
    dots, dots_logsumexp = our_softmax(dots, passthrough)
if verbose:
print("Our attend dots post softmax", dots.shape, dots_logsumexp.shape)
if dropout > 0.0:
assert rng is not None
# Dropout is broadcast across the bin dimension
dropout_shape = (dots.shape[-2], dots.shape[-1])
keep_prob = tie_in(dots, 1.0 - dropout)
keep = fastmath.random.bernoulli(rng, keep_prob, dropout_shape)
multiplier = keep.astype(dots.dtype) / tie_in(keep, keep_prob)
dots = dots * multiplier
## Step 3 ##
# The softmax normalizer (dots_logsumexp) is used by multi-round LSH attn.
out = np.matmul(dots, v)
if verbose:
print("Our attend out1", out.shape)
out = np.reshape(out, (-1, out.shape[-1]))
if verbose:
print("Our attend out2", out.shape)
dots_logsumexp = np.reshape(dots_logsumexp, (-1,))
return out, dots_logsumexp
seq_len = 8
emb_len = 5
d_qk = 3
d_v = 4
with fastmath.use_backend("jax"): # specify the backend for consistency
rng_attend = fastmath.random.get_prng(1)
q = k = jax.random.uniform(rng_attend, (seq_len, d_qk), dtype=np.float32)
v = jax.random.uniform(rng_attend, (seq_len, d_v), dtype=np.float32)
o, logits = our_simple_attend(
q,
k,
v,
mask_fn=None,
q_info=None,
kv_info=None,
dropout=0.0,
rng=rng_attend,
verbose=True,
)
print(o, "\n", logits) | Our attend dots (8, 8)
Our attend dots post softmax (8, 8) (8, 1)
Our attend out1 (8, 4)
Our attend out2 (8, 4)
[[0.5606324 0.7290605 0.5251243 0.47101074]
[0.5713517 0.71991956 0.5033342 0.46975708]
[0.5622886 0.7288458 0.52172124 0.46318397]
[0.5568317 0.72234154 0.542236 0.4699722 ]
[0.56504494 0.72274375 0.5204978 0.47231334]
[0.56175965 0.7216782 0.53293145 0.48003793]
[0.56753993 0.72232544 0.5141734 0.46625748]
[0.57100445 0.70785505 0.5325362 0.4590797 ]]
[2.6512175 2.1914332 2.6630518 2.7792363 2.4583826 2.5421977 2.4145055
2.5111294]
| MIT | Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb | aibenStunner/NLP-specialization |
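As a sanity check, the shared-QK path (k=None) can be reproduced directly: length-normalize q, scale by the square root of the head size, then apply the softmax and the value multiply by hand. A sketch, reusing `q`, `v`, and `rng_attend` from the cell above:

```python
# Cross-check the shared-QK path of our_simple_attend (a sketch).
k_norm = length_normalized(q) / np.sqrt(q.shape[-1])
direct = np.matmul(our_softmax(np.matmul(q, np.swapaxes(k_norm, -1, -2)))[0], v)
o_shared, _ = our_simple_attend(q, k=None, v=v, rng=rng_attend)
print(np.max(np.abs(direct - o_shared)))  # expect a value near zero
```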
**Expected Output**

The printed trace and the returned (o, logits) values should match the cell output shown above. Since this notebook is ungraded, the cell above already contains the completed reference code.

Part 2.3 Class OurSelfAttention

Here we create our own self-attention layer by creating a class `OurSelfAttention`. The parent class will be the tl.SelfAttention layer in Trax. We will only override the `forward_unbatched` routine.

We're not asking you to modify anything in this routine. There are some comments to draw your attention to a few lines. | class OurSelfAttention(tl.SelfAttention):
"""Our self-attention. Just the Forward Function."""
def forward_unbatched(
self, x, mask=None, *, weights, state, rng, update_state, verbose=False
):
print("ourSelfAttention:forward_unbatched")
del update_state
attend_rng, output_rng = fastmath.random.split(rng)
if self.bias:
if self.share_qk:
w_q, w_v, w_o, b_q, b_v = weights
else:
w_q, w_k, w_v, w_o, b_q, b_k, b_v = weights
else:
if self.share_qk:
w_q, w_v, w_o = weights
else:
w_q, w_k, w_v, w_o = weights
print("x.shape,w_q.shape", x.shape, w_q.shape)
q = np.matmul(x, w_q)
k = None
if not self.share_qk:
k = np.matmul(x, w_k)
v = np.matmul(x, w_v)
if self.bias:
q = q + b_q
if not self.share_qk:
k = k + b_k
v = v + b_v
mask_fn = functools.partial(
mask_self_attention,
causal=self.causal,
exclude_self=self.share_qk,
masked=self.masked,
)
q_info = kv_info = tie_in(x, np.arange(q.shape[-2], dtype=np.int32))
assert (mask is not None) == self.masked
if self.masked:
# mask is a boolean array (True means "is valid token")
ones_like_mask = tie_in(x, np.ones_like(mask, dtype=np.int32))
kv_info = kv_info * np.where(mask, ones_like_mask, -ones_like_mask)
# Notice, we are callout our vesion of attend
o, _ = our_simple_attend(
q,
k,
v,
mask_fn=mask_fn,
q_info=q_info,
kv_info=kv_info,
dropout=self.attention_dropout,
rng=attend_rng,
verbose=True,
)
# Notice, wo weight matrix applied to output of attend in forward_unbatched
out = np.matmul(o, w_o)
out = apply_broadcasted_dropout(out, self.output_dropout, output_rng)
return out, state
causal = False
masked = False
mask = None
attention_dropout = 0.0
n_heads = 3
d_qk = 3
d_v = 4
seq_len = 8
emb_len = 5
batch_size = 1
osa = OurSelfAttention(
n_heads=n_heads,
d_qk=d_qk,
d_v=d_v,
causal=causal,
use_reference_code=True,
attention_dropout=attention_dropout,
mode="train",
)
rng_osa = fastmath.random.get_prng(1)
x = jax.random.uniform(
jax.random.PRNGKey(0), (batch_size, seq_len, emb_len), dtype=np.float32
)
_, _ = osa.init(tl.shapes.signature(x), rng=rng_osa)
osa(x) | ourSelfAttention:forward_unbatched
x.shape,w_q.shape (8, 5) (5, 3)
Our attend dots (8, 8)
Our attend dots post softmax (8, 8) (8, 1)
Our attend out1 (8, 4)
Our attend out2 (8, 4)
ourSelfAttention:forward_unbatched
x.shape,w_q.shape (8, 5) (5, 3)
Our attend dots (8, 8)
Our attend dots post softmax (8, 8) (8, 1)
Our attend out1 (8, 4)
Our attend out2 (8, 4)
ourSelfAttention:forward_unbatched
x.shape,w_q.shape (8, 5) (5, 3)
Our attend dots (8, 8)
Our attend dots post softmax (8, 8) (8, 1)
Our attend out1 (8, 4)
Our attend out2 (8, 4)
| MIT | Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb | aibenStunner/NLP-specialization |
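Since `forward_unbatched` returns a (seqlen, emb_len) array per head, and the parent layer recombines the heads, the layer output keeps the input's shape. A quick check (a sketch; note it re-runs the forward pass and its prints):

```python
# The layer preserves the (batch, seq_len, emb_len) shape: w_o projects each
# head's d_v output back to the embedding size.
print(osa(x).shape)
```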
**Expected Output**

Notice a few things:
* the w_q (and w_k) matrices are applied to each row, i.e. each embedding, of the input. This is similar to the filter operation in a convolution
* forward_unbatched is called 3 times. This is because we have 3 heads in this example

The printed trace matches the output shown above, and the layer returns:
```
DeviceArray([[[ 6.70414209e-01, -1.04319841e-01, -5.33822298e-01,  1.92711830e-01, -4.54187393e-05],
              [ 6.64090097e-01, -1.01875424e-01, -5.35733163e-01,  1.88311756e-01, -6.30629063e-03],
              [ 6.73380017e-01, -1.06952369e-01, -5.31989932e-01,  1.90056816e-01,  1.30271912e-03],
              [ 6.84564888e-01, -1.13240272e-01, -5.50182462e-01,  1.95673436e-01,  5.47635555e-03],
              [ 6.81435883e-01, -1.11068964e-01, -5.32343209e-01,  1.91912338e-01,  5.69400191e-03],
              [ 6.80724978e-01, -1.08496904e-01, -5.34994125e-01,  1.96332246e-01,  5.89773059e-03],
              [ 6.80933356e-01, -1.14087075e-01, -5.18659890e-01,  1.90674081e-01,  1.14096403e-02],
              [ 6.80265009e-01, -1.09031796e-01, -5.38248718e-01,  1.94203183e-01,  4.23943996e-03]]],
            dtype=float32)
```

Part 3.0 Trax LSHSelfAttention

Part 3.1 Description

The larger the matrix multiply in the previous section is, the more context can be taken into account when making the next decision. However, the self-attention dot product grows as the square of the input size. For example, if one wished to have an input size of 1024, that would result in $1024^2$, or over a million, dot products for each head! As a result, there has been significant research related to reducing the compute requirements. One such approach is Locality Sensitive Hashing (LSH) Self Attention.

You may recall that earlier in the course you utilized LSH to find similar tweets without resorting to calculating the cosine similarity for each pair of embeddings. We will use a similar approach here. It may be best described with an example.

Figure 9: Example of LSH Self Attention

LSH self-attention uses Queries only, no Keys. Attention then generates a metric of the similarity of each value of Q relative to all the other values in Q. An earlier assignment demonstrated that values which hash to the same bucket are likely to be similar. Further, multiple random hashes can improve the chances of finding entries which are similar. This is the approach taken here, though the hash is implemented a bit differently. The values of Q are hashed into buckets using a randomly generated set of hash vectors. Multiple sets of hash vectors are used, generating multiple hash tables. In the figure above, we have 3 hash tables with 4 buckets in each table. Notionally, following the hash, the values of Q have been replicated 3 times and distributed to their appropriate bucket in each of the 3 tables. To find similarity, then, one generates dot-products only between members of the buckets. The result of this operation provides information on which entries are similar. As the operation has been distributed over multiple hash tables, these results need to be combined to form a complete picture, and this can be used to generate a reduced dot-product attention array.
It's clear that because we do not compare every value against every other value, the size of *Dots* will be reduced.

The challenge in this approach is getting it to operate efficiently. You may recall from the earlier assignments that the buckets were lists of entries and had varying lengths. This will operate poorly on a vector processing machine such as a GPU or TPU. Ideally, operations are done in large blocks with uniform sizes. While it is straightforward to implement the hash algorithm this way, it is challenging to manage buckets and variable-sized dot-products. This will be discussed further below. For now, we will examine and implement the hash function.

Part 3.2 our_hash_vectors

*our_hash_vectors* is a reimplementation of Trax *hash_vectors*. It takes in an array of vectors, hashes the entries, and returns an array assigning each input vector to one bucket per hash table. Hashing is described as creating *random rotations*; see [Practical and Optimal LSH for Angular Distance](https://arxiv.org/pdf/1509.02897.pdf).

Figure 10: Processing steps in our_hash_vectors

Note, in the diagram, sizes relate to our expected input $Q$, while our_hash_vectors is written assuming a generic input vector.

**Instructions**

**Step 1** Create an array of random normal vectors which will be our hash vectors. Each vector will be hashed into a hash table and into `rot_size//2` buckets. We use `rot_size//2` to reduce computation. Later in the routine we will form the negative rotations with a simple negation and concatenate to get a full `rot_size` number of rotations.
* use fastmath.random.normal and create an array of random vectors of shape (vec.shape[-1], n_hashes, rot_size//2)

**Step 2** In this step we simply do the matrix multiply. `jax` has an accelerated version of [einsum](https://numpy.org/doc/stable/reference/generated/numpy.einsum.html). Here we will utilize more conventional routines.
* 2a: np.reshape random_rotations into a 2-dimensional array ([-1, n_hashes * (rot_size // 2)])
* 2b: np.dot vecs and random_rotations, forming our rotated_vecs
* 2c: back to 3 dimensions with np.reshape [-1, n_hashes, rot_size//2]
* 2d: prepare for concatenating by swapping dimensions: np.transpose (1, 0, 2)

**Step 3** Here we concatenate our rotation vectors, getting a full rot_size number of buckets (note, n_buckets = rot_size).
* use np.concatenate, [rotated_vecs, -rotated_vecs], axis=-1

**Step 4** **This is the exciting step!** You have no doubt been wondering how we will turn these vectors into bucket indexes. By performing np.argmax over the rotations for a given entry, you get the index of the best match! We will use this as a bucket index.
* np.argmax(...).astype(np.int32); be sure to use the correct axis!

**Step 5** In this style of hashing, items which land in bucket 0 of hash table 0 are not necessarily similar to those landing in bucket 0 of hash table 1, so we keep them separate. We do this by offsetting the bucket numbers by 'n_buckets'.
* add buckets and offsets and reshape into a one-dimensional array

This will return a 1D array of size n_hashes * vec.shape[0]. | def our_hash_vectors(vecs, rng, n_buckets, n_hashes, mask=None, verbose=False):
"""
Args:
vecs: tensor of at least 2 dimension,
rng: random number generator
n_buckets: number of buckets in each hash table
n_hashes: the number of hash tables
mask: None indicating no mask or a 1D boolean array of length vecs.shape[0], containing the location of padding value
verbose: controls prints for debug
Returns:
A vector of size n_hashes * vecs.shape[0] containing the buckets associated with each input vector per hash table.
"""
# check for even, integer bucket sizes
assert isinstance(n_buckets, int) and n_buckets % 2 == 0
rng = fastmath.stop_gradient(tie_in(vecs, rng))
rot_size = n_buckets
### Start Code Here
### Step 1 ###
rotations_shape = (vecs.shape[-1], n_hashes, rot_size // 2)
random_rotations = fastmath.random.normal(rng, rotations_shape).astype(np.float32)
if verbose:
print("random.rotations.shape", random_rotations.shape)
### Step 2 ###
if fastmath.backend_name() == "jax":
rotated_vecs = np.einsum("tf,fhb->htb", vecs, random_rotations)
print("using jax")
else:
# Step 2a
random_rotations = np.reshape(random_rotations, ([-1, n_hashes * (rot_size // 2)]))
if verbose:
print("random_rotations reshaped", random_rotations.shape)
# Step 2b
rotated_vecs = np.dot(vecs, random_rotations)
if verbose:
print("rotated_vecs1", rotated_vecs.shape)
# Step 2c
rotated_vecs = np.reshape(rotated_vecs, [-1, n_hashes, rot_size//2])
if verbose:
print("rotated_vecs2", rotated_vecs.shape)
# Step 2d
rotated_vecs = np.transpose(rotated_vecs, (1, 0, 2))
if verbose:
print("rotated_vecs3", rotated_vecs.shape)
### Step 3 ###
rotated_vecs = np.concatenate([rotated_vecs, -rotated_vecs], axis=-1)
if verbose:
print("rotated_vecs.shape", rotated_vecs.shape)
### Step 4 ###
buckets = np.argmax(rotated_vecs, axis=-1).astype(np.int32)
if verbose:
print("buckets.shape", buckets.shape)
if verbose:
print("buckets", buckets)
if mask is not None:
n_buckets += 1 # Create an extra bucket for padding tokens only
buckets = np.where(mask[None, :], buckets, n_buckets - 1)
# buckets is now (n_hashes, seqlen). Next we add offsets so that
# bucket numbers from different hashing rounds don't overlap.
offsets = tie_in(buckets, np.arange(n_hashes, dtype=np.int32))
offsets = np.reshape(offsets * n_buckets, (-1, 1))
### Step 5 ###
buckets = np.reshape(buckets + offsets, (-1,))
if verbose:
print("buckets with offsets", buckets.shape, "\n", buckets)
### End Code Here
return buckets
# example code. Note for reference, the sizes in this example match the values in the diagram above.
ohv_q = np.ones((8, 5)) # (seq_len=8, n_q=5)
ohv_n_buckets = 4 # even number
ohv_n_hashes = 3
with fastmath.use_backend("tf"):
ohv_rng = fastmath.random.get_prng(1)
ohv = our_hash_vectors(
ohv_q, ohv_rng, ohv_n_buckets, ohv_n_hashes, mask=None, verbose=True
)
print("ohv shape", ohv.shape, "\nohv", ohv) # (ohv_n_hashes * ohv_n_buckets)
# note the random number generators do not produce the same results with different backends
with fastmath.use_backend("jax"):
ohv_rng = fastmath.random.get_prng(1)
ohv = our_hash_vectors(ohv_q, ohv_rng, ohv_n_buckets, ohv_n_hashes, mask=None)
print("ohv shape", ohv.shape, "\nohv", ohv) # (ohv_n_hashes * ohv_n_buckets) | random.rotations.shape (5, 3, 2)
random_rotations reshaped (5, 6)
rotated_vecs1 (8, 6)
rotated_vecs2 (8, 3, 2)
rotated_vecs3 (3, 8, 2)
rotated_vecs.shape (3, 8, 4)
buckets.shape (3, 8)
buckets ndarray<tf.Tensor(
[[3 3 3 3 3 3 3 3]
[3 3 3 3 3 3 3 3]
[3 3 3 3 3 3 3 3]], shape=(3, 8), dtype=int32)>
buckets with offsets (24,)
ndarray<tf.Tensor([ 3 3 3 3 3 3 3 3 7 7 7 7 7 7 7 7 11 11 11 11 11 11 11 11], shape=(24,), dtype=int32)>
ohv shape (24,)
ohv ndarray<tf.Tensor([ 3 3 3 3 3 3 3 3 7 7 7 7 7 7 7 7 11 11 11 11 11 11 11 11], shape=(24,), dtype=int32)>
using jax
ohv shape (24,)
ohv [ 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 11 11 11 11 11 11 11 11]
| MIT | Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb | aibenStunner/NLP-specialization |
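The whole point of LSH is that similar vectors should usually land in the same bucket. Here is a quick, informal check (a sketch with made-up values): hash four random vectors together with slightly perturbed copies of themselves and compare the bucket assignments.

```python
# Informal LSH check (a sketch): entries i and i+4 are near-duplicates and
# should usually receive the same bucket in each hash round.
with fastmath.use_backend("jax"):
    rng_demo = fastmath.random.get_prng(42)
    base = jax.random.uniform(jax.random.PRNGKey(7), (4, 5), dtype=np.float32)
    pair = np.concatenate([base, base + 0.01], axis=0)
    print(our_hash_vectors(pair, rng_demo, n_buckets=8, n_hashes=2))
```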
**Expected Values**

The bucket assignments and shapes printed above are the expected values; note that the `tf` and `jax` random number generators do not produce the same buckets. Since this notebook is ungraded, the cell above already contains the completed reference code.

Part 3.3 Sorting Buckets

Great! Now that we have a hash function, we can work on sorting our buckets and performing our matrix operations. We'll walk through this algorithm in small steps:
* sort_buckets - we'll perform the sort
* softmax
* dotandv - do the matrix math to form the dot-product and output

These routines will demonstrate a simplified version of the algorithm.
We won't address masking and variable bucket sizes but will consider how they would be handled.

**sort_buckets**

At this point, we have called the hash function and were returned the associated buckets. For example, if we started with `q[n_seq, n_q]`, with `n_hash = 2; n_buckets = 4; n_seq = 8`, we might be returned:
`bucket = [0,1,2,3,0,1,2,3, 4,5,6,7,4,5,6,7]`
Note that it is n_hash\*n_seq long and that the bucket values for each hash have been offset by n_buckets so the numbers do not overlap. Going forward, we are going to sort this array of buckets to group together members of the same (hash, bucket) pair.

**Instructions**

**Step 1** Our goal is to sort $q$ rather than the bucket list, so we will need to track the association of the buckets to their elements in $q$.
* using np.arange, create `ticker`, just a sequence of numbers (0..n_hashes * seqlen) associating members of q with their bucket.

**Step 2** This step is provided to you as it is a bit difficult to describe. We want to disambiguate elements that map to the same bucket. When a sorting routine encounters a situation where multiple entries have the same value, it can correctly choose any entry to go first, which makes testing ambiguous. This prevents that. We multiply all the buckets by `seqlen` and then add `ticker % seqlen`.

**Step 3** Here we are! Ready to sort. This is the exciting part.
* Utilize [fastmath.sort_key_val](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.sort_key_val.html#jax.lax.sort_key_val) and sort `buckets_and_t` and `ticker`.

**Step 4** We need to be able to undo the sort at the end to get things back into their correct locations.
* sort `sticker` and `ticker` to form the reverse map

**Step 5** Create our sorted q and sorted v.
* use [np.take](https://numpy.org/doc/stable/reference/generated/numpy.take.html) and `st` to grab the correct values in `q` for the sorted values, `sq`. Use axis=0.

Use the example code below the routine to check and help debug your results. | def sort_buckets(buckets, q, v, n_buckets, n_hashes, seqlen, verbose=True):
"""
Args:
buckets: tensor of at least 2 dimension,
n_buckets: number of buckets in each hash table
n_hashes: the number of hash tables
"""
if verbose:
print("---sort_buckets--")
## Step 1
ticker = np.arange(n_hashes * seqlen)
if verbose:
print("ticker", ticker.shape, ticker)
## Step 2
buckets_and_t = seqlen * buckets + (ticker % seqlen) # provided
if verbose:
print("buckets_and_t", buckets_and_t.shape, buckets_and_t)
# Hash-based sort ("s" at the start of variable names means "sorted")
# Step 3
sbuckets_and_t, sticker = fastmath.sort_key_val(
buckets_and_t, ticker, dimension=-1)
if verbose:
print("sbuckets_and_t", sbuckets_and_t.shape, sbuckets_and_t)
if verbose:
print("sticker", sticker.shape, sticker)
# Step 4
_, undo_sort = fastmath.sort_key_val(sticker, ticker, dimension=-1)
if verbose:
print("undo_sort", undo_sort.shape, undo_sort)
# Step 5
st = sticker % seqlen # provided
sq = np.take(q, st, axis=0)
sv = np.take(v, st, axis=0)
return sq, sv, sticker, undo_sort
t_n_hashes = 2
t_n_buckets = 4
t_n_seq = t_seqlen = 8
t_n_q = 3
n_v = 5
t_q = (np.array([(j % t_n_buckets) for j in range(t_n_seq)]) * np.ones((t_n_q, 1))).T
t_v = np.ones((t_n_seq, n_v))
t_buckets = np.array(
[
(j % t_n_buckets) + t_n_buckets * i
for i in range(t_n_hashes)
for j in range(t_n_seq)
]
)
print("q\n", t_q)
print("t_buckets: ", t_buckets)
t_sq, t_sv, t_sticker, t_undo_sort = sort_buckets(
t_buckets, t_q, t_v, t_n_buckets, t_n_hashes, t_seqlen, verbose=True
)
print("sq.shape", t_sq.shape, "sv.shape", t_sv.shape)
print("sq\n", t_sq) | q
[[0. 0. 0.]
[1. 1. 1.]
[2. 2. 2.]
[3. 3. 3.]
[0. 0. 0.]
[1. 1. 1.]
[2. 2. 2.]
[3. 3. 3.]]
t_buckets: [0 1 2 3 0 1 2 3 4 5 6 7 4 5 6 7]
---sort_buckets--
ticker (16,) [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15]
buckets_and_t (16,) [ 0 9 18 27 4 13 22 31 32 41 50 59 36 45 54 63]
sbuckets_and_t (16,) [ 0 4 9 13 18 22 27 31 32 36 41 45 50 54 59 63]
sticker (16,) [ 0 4 1 5 2 6 3 7 8 12 9 13 10 14 11 15]
undo_sort (16,) [ 0 2 4 6 1 3 5 7 8 10 12 14 9 11 13 15]
sq.shape (16, 3) sv.shape (16, 5)
sq
[[0. 0. 0.]
[0. 0. 0.]
[1. 1. 1.]
[1. 1. 1.]
[2. 2. 2.]
[2. 2. 2.]
[3. 3. 3.]
[3. 3. 3.]
[0. 0. 0.]
[0. 0. 0.]
[1. 1. 1.]
[1. 1. 1.]
[2. 2. 2.]
[2. 2. 2.]
[3. 3. 3.]
[3. 3. 3.]]
| MIT | Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb | aibenStunner/NLP-specialization |
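Before moving on, it is worth convincing yourself that `undo_sort` really reverses the sort. A sketch, reusing the test values above: unsorting `t_sq` should recover `t_q` replicated once per hash round.

```python
# Round-trip check (a sketch): undoing the sort recovers q, tiled per hash.
unsorted_q = np.take(t_sq, t_undo_sort, axis=0)
print(np.reshape(unsorted_q, (t_n_hashes, t_seqlen, t_n_q)))  # each round == t_q
```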
**Expected Values**

The sorted arrays printed above are the expected values. Since this notebook is ungraded, the cell above already contains the completed reference code.

Part 3.4 Chunked dot product attention

Now let's create the dot product attention. We have sorted $Q$ so that elements that the hash has determined are likely to be similar are adjacent to each other. We now want to perform the dot-product within those limited regions - in 'chunks'.

Figure 11: Performing dot product in 'chunks'

The example we have been working on is shown above, with a sequence length of 8, 2 hashes, 4 buckets, and, conveniently, the content of Q was such that when sorted, there were 2 entries in each bucket. If we reshape Q into a (8, 2, n_q), we can use numpy matmul to perform the operation. Numpy [matmul](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html) will treat the inputs as a stack of matrices residing in the last two indexes. This will allow us to matrix multiply Q with itself in *chunks*, and later the same trick can be used to perform the matrix multiply with v.

We will perform a softmax on the output of the dot product of Q and Q, but in this case, there is a bit more to the story. Recall the output of the hash had multiple hash tables. We will perform softmax on those separately and then must combine them. This is where the form of softmax we defined at the top of the notebook comes into play. The routines below will utilize the logsumexp values that the `our_softmax` routine calculates.

There is a good deal of [reshaping](https://numpy.org/doc/stable/reference/generated/numpy.reshape.html) to get things into the right formats. The code has many print statements that match the output shown below. You can use those to check your work as you go along.
If you don't do a lot of 3-dimensional matrix multiplications in your daily life, it might be worthwhile to open a spare cell and practice a few simple examples to get the hang of it! Here is one to start with: | a = np.arange(16 * 3).reshape((16, 3))
chunksize = 2
ar = np.reshape(
a, (-1, chunksize, a.shape[-1])
) # the -1 usage is very handy, see numpy reshape
print(ar.shape) | (8, 2, 3)
| MIT | Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb | aibenStunner/NLP-specialization |
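The stacked matrix multiply used in Step 1 below works the same way; matmul pairs up the last two axes, chunk by chunk. A sketch, reusing `ar` from above:

```python
# Stacked matmul practice (a sketch): one small dot-product map per chunk.
art = np.swapaxes(ar, -1, -2)    # (8, 3, 2)
print(np.matmul(ar, art).shape)  # (8, 2, 2)
```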
**Instructions**

**Step 1** Reshaping Q
* np.reshape `sq` (sorted q) to be 3-dimensional. The middle dimension is the size of the 'chunk' specified by `kv_chunk_len`
* np.swapaxes to perform a 'transpose' on the reshaped `sq`, *but only on the last two dimensions*
* np.matmul the two values.

**Step 2**
* use our_softmax to perform the softmax on the dot product. Don't forget `passthrough`

**Step 3**
* np.reshape `sv`. Like `sq`, the middle dimension is the size of the 'chunk' specified by `kv_chunk_len`
* np.matmul dotlike and the reshaped `sv`
* np.reshape `so` to a two-dimensional array whose last dimension stays the same (`so.shape[-1]`)
* `logits` also needs reshaping; we'll do that.

**Step 4** Now we can undo the sort.
* use [np.take](https://numpy.org/doc/stable/reference/generated/numpy.take.html) with `undo_sort` and axis=0 to unsort `so`
* do the same with `slogits`.

**Step 5** This step combines the results of multiple hashes. Recall, the softmax was only over the values in one hash; this extends it to all the hashes. Read through it; the code is provided. Note this is taking place *after* the matrix multiply with v, while the softmax output is used before the multiply. How does this achieve the correct result? | def dotandv(sq, sv, undo_sort, kv_chunk_len, n_hashes, seqlen, passthrough, verbose=False ):
# Step 1
rsq = np.reshape(sq,(-1, kv_chunk_len, sq.shape[-1]))
rsqt = np.swapaxes(rsq, -1, -2)
if verbose: print("rsq.shape,rsqt.shape: ", rsq.shape,rsqt.shape)
dotlike = np.matmul(rsq, rsqt)
if verbose: print("dotlike\n", dotlike)
#Step 2
dotlike, slogits = our_softmax(dotlike, passthrough)
if verbose: print("dotlike post softmax\n", dotlike)
#Step 3
vr = np.reshape(sv, (-1, kv_chunk_len, sv.shape[-1]))
if verbose: print("dotlike.shape, vr.shape:", dotlike.shape, vr.shape)
so = np.matmul(dotlike, vr)
if verbose: print("so.shape:", so.shape)
so = np.reshape(so, (-1, so.shape[-1]))
slogits = np.reshape(slogits, (-1,)) # provided
if verbose: print("so.shape,slogits.shape", so.shape, slogits.shape)
#Step 4
o = np.take(so, undo_sort, axis=0)
logits = np.take(slogits, undo_sort, axis=0)
if verbose: print("o.shape,o", o.shape, o)
if verbose: print("logits.shape, logits", logits.shape, logits)
#Step 5 (Provided)
if n_hashes > 1:
o = np.reshape(o, (n_hashes, seqlen, o.shape[-1]))
logits = np.reshape(logits, (n_hashes, seqlen, 1))
probs = np.exp(logits - fastmath.logsumexp(logits, axis=0, keepdims=True))
o = np.sum(o * probs, axis=0)
return(o)
t_kv_chunk_len = 2
out = dotandv(
t_sq,
t_sv,
t_undo_sort,
t_kv_chunk_len,
t_n_hashes,
t_seqlen,
passthrough=True,
verbose=True,
)
print("out\n", out)
print("\n-----With softmax enabled----\n")
out = dotandv(
t_sq,
t_sv,
t_undo_sort,
t_kv_chunk_len,
t_n_hashes,
t_seqlen,
passthrough=False,
verbose=True,
)
print("out\n", out) | rsq.shape,rsqt.shape: (8, 2, 3) (8, 3, 2)
dotlike
[[[ 0. 0.]
[ 0. 0.]]
[[ 3. 3.]
[ 3. 3.]]
[[12. 12.]
[12. 12.]]
[[27. 27.]
[27. 27.]]
[[ 0. 0.]
[ 0. 0.]]
[[ 3. 3.]
[ 3. 3.]]
[[12. 12.]
[12. 12.]]
[[27. 27.]
[27. 27.]]]
dotlike post softmax
[[[ 0. 0.]
[ 0. 0.]]
[[ 3. 3.]
[ 3. 3.]]
[[12. 12.]
[12. 12.]]
[[27. 27.]
[27. 27.]]
[[ 0. 0.]
[ 0. 0.]]
[[ 3. 3.]
[ 3. 3.]]
[[12. 12.]
[12. 12.]]
[[27. 27.]
[27. 27.]]]
dotlike.shape, vr.shape: (8, 2, 2) (8, 2, 5)
so.shape: (8, 2, 5)
so.shape,slogits.shape (16, 5) (16,)
o.shape,o (16, 5) [[ 0. 0. 0. 0. 0.]
[ 6. 6. 6. 6. 6.]
[24. 24. 24. 24. 24.]
[54. 54. 54. 54. 54.]
[ 0. 0. 0. 0. 0.]
[ 6. 6. 6. 6. 6.]
[24. 24. 24. 24. 24.]
[54. 54. 54. 54. 54.]
[ 0. 0. 0. 0. 0.]
[ 6. 6. 6. 6. 6.]
[24. 24. 24. 24. 24.]
[54. 54. 54. 54. 54.]
[ 0. 0. 0. 0. 0.]
[ 6. 6. 6. 6. 6.]
[24. 24. 24. 24. 24.]
[54. 54. 54. 54. 54.]]
logits.shape, logits (16,) [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
out
[[ 0. 0. 0. 0. 0.]
[ 6. 6. 6. 6. 6.]
[24. 24. 24. 24. 24.]
[54. 54. 54. 54. 54.]
[ 0. 0. 0. 0. 0.]
[ 6. 6. 6. 6. 6.]
[24. 24. 24. 24. 24.]
[54. 54. 54. 54. 54.]]
-----With softmax enabled----
rsq.shape,rsqt.shape: (8, 2, 3) (8, 3, 2)
dotlike
[[[ 0. 0.]
[ 0. 0.]]
[[ 3. 3.]
[ 3. 3.]]
[[12. 12.]
[12. 12.]]
[[27. 27.]
[27. 27.]]
[[ 0. 0.]
[ 0. 0.]]
[[ 3. 3.]
[ 3. 3.]]
[[12. 12.]
[12. 12.]]
[[27. 27.]
[27. 27.]]]
dotlike post softmax
[[[0.5 0.5 ]
[0.5 0.5 ]]
[[0.5 0.5 ]
[0.5 0.5 ]]
[[0.49999976 0.49999976]
[0.49999976 0.49999976]]
[[0.49999976 0.49999976]
[0.49999976 0.49999976]]
[[0.5 0.5 ]
[0.5 0.5 ]]
[[0.5 0.5 ]
[0.5 0.5 ]]
[[0.49999976 0.49999976]
[0.49999976 0.49999976]]
[[0.49999976 0.49999976]
[0.49999976 0.49999976]]]
dotlike.shape, vr.shape: (8, 2, 2) (8, 2, 5)
so.shape: (8, 2, 5)
so.shape,slogits.shape (16, 5) (16,)
o.shape,o (16, 5) [[1. 1. 1. 1. 1. ]
[1. 1. 1. 1. 1. ]
[0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]
[0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]
[1. 1. 1. 1. 1. ]
[1. 1. 1. 1. 1. ]
[0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]
[0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]
[1. 1. 1. 1. 1. ]
[1. 1. 1. 1. 1. ]
[0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]
[0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]
[1. 1. 1. 1. 1. ]
[1. 1. 1. 1. 1. ]
[0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]
[0.9999995 0.9999995 0.9999995 0.9999995 0.9999995]]
logits.shape, logits (16,) [ 0.6931472 3.6931472 12.693148 27.693148 0.6931472 3.6931472
12.693148 27.693148 0.6931472 3.6931472 12.693148 27.693148
0.6931472 3.6931472 12.693148 27.693148 ]
out
[[1. 1. 1. 1. 1. ]
[1. 1. 1. 1. 1. ]
[0.99999905 0.99999905 0.99999905 0.99999905 0.99999905]
[0.99999905 0.99999905 0.99999905 0.99999905 0.99999905]
[1. 1. 1. 1. 1. ]
[1. 1. 1. 1. 1. ]
[0.99999905 0.99999905 0.99999905 0.99999905 0.99999905]
[0.99999905 0.99999905 0.99999905 0.99999905 0.99999905]]
| MIT | Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb | aibenStunner/NLP-specialization |
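A quick sanity check on the softmax-enabled result (a sketch): since `t_v` is all ones and each softmax row sums to 1, every entry of `out` should be very close to 1 regardless of how the hash grouped the entries.

```python
# Each output entry should be ~1: rows of the softmax sum to 1 and v is ones.
print(np.max(np.abs(out - 1.0)))  # expect a tiny value
```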
**Expected Values**

The shapes and arrays printed above are the expected values. Since this notebook is ungraded, the cell above already contains the completed reference code.

Great! You have now worked through example code for most of the operations that are unique to the LSH version of self-attention. I'm sure at this point you are wondering what happens if the number of entries in a bucket is not evenly distributed the way our example is. It is possible, for example, for all of the `seqlen` entries to land in one bucket. Further, since the buckets are not aligned, our 'chunks' may be misaligned with the start of the bucket. The implementation addresses this by attending to adjacent chunks, as was described in the lecture:

Figure 12: Misaligned Access, looking before and after

Hopefully, having implemented parts of this, you will appreciate this diagram more fully.

Part 3.5 OurLSHSelfAttention

You can examine the full implementations below. Areas we did not 'attend to' in our implementations above include variable bucket sizes and masking. We will instantiate a layer of the full implementation below. We tried to use the same variable names above to make it easier to decipher the full version. Note that some of the functionality we implemented in our routines is split between `attend` and `forward_unbatched`. We've inserted our version of the hash below, but use the original version of `attend`. | # original version from trax 1.3.4
def attend(
q,
k=None,
v=None,
q_chunk_len=None,
kv_chunk_len=None,
n_chunks_before=0,
n_chunks_after=0,
mask_fn=None,
q_info=None,
kv_info=None,
dropout=0.0,
rng=None,
):
"""Dot-product attention, with optional chunking and/or masking.
Args:
q: Query vectors, shape [q_len, d_qk]
k: Key vectors, shape [kv_len, d_qk]; or None
v: Value vectors, shape [kv_len, d_v]
q_chunk_len: Set to non-zero to enable chunking for query vectors
kv_chunk_len: Set to non-zero to enable chunking for key/value vectors
n_chunks_before: Number of adjacent previous chunks to attend to
n_chunks_after: Number of adjacent subsequent chunks to attend to
mask_fn: TODO(kitaev) doc
q_info: Query-associated metadata for masking
kv_info: Key-associated metadata for masking
dropout: Dropout rate
rng: RNG for dropout
Returns:
A tuple (output, dots_logsumexp). The output has shape [q_len, d_v], and
dots_logsumexp has shape [q_len]. The logsumexp of the attention
probabilities is useful for combining multiple rounds of attention (as in
LSH attention).
"""
assert v is not None
share_qk = k is None
if q_info is None:
q_info = np.arange(q.shape[-2], dtype=np.int32)
if kv_info is None and not share_qk:
kv_info = np.arange(v.shape[-2], dtype=np.int32)
# Split q/k/v into chunks along the time axis, if desired.
if q_chunk_len is not None:
q = np.reshape(q, (-1, q_chunk_len, q.shape[-1]))
q_info = np.reshape(q_info, (-1, q_chunk_len))
if share_qk:
assert kv_chunk_len is None or kv_chunk_len == q_chunk_len
k = q
kv_chunk_len = q_chunk_len
if kv_info is None:
kv_info = q_info
elif kv_chunk_len is not None:
# kv_info is not None, but reshape as required.
kv_info = np.reshape(kv_info, (-1, kv_chunk_len))
elif kv_chunk_len is not None:
k = np.reshape(k, (-1, kv_chunk_len, k.shape[-1]))
kv_info = np.reshape(kv_info, (-1, kv_chunk_len))
if kv_chunk_len is not None:
v = np.reshape(v, (-1, kv_chunk_len, v.shape[-1]))
if share_qk:
k = length_normalized(k)
k = k / np.sqrt(k.shape[-1])
# Optionally include adjacent chunks.
if q_chunk_len is not None or kv_chunk_len is not None:
assert q_chunk_len is not None and kv_chunk_len is not None
else:
assert n_chunks_before == 0 and n_chunks_after == 0
k = look_adjacent(k, n_chunks_before, n_chunks_after)
v = look_adjacent(v, n_chunks_before, n_chunks_after)
kv_info = look_adjacent(kv_info, n_chunks_before, n_chunks_after)
# Dot-product attention.
dots = np.matmul(q, np.swapaxes(k, -1, -2))
# Masking
if mask_fn is not None:
dots = mask_fn(dots, q_info[..., :, None], kv_info[..., None, :])
# Softmax.
dots_logsumexp = fastmath.logsumexp(dots, axis=-1, keepdims=True)
dots = np.exp(dots - dots_logsumexp)
if dropout > 0.0:
assert rng is not None
# Dropout is broadcast across the bin dimension
dropout_shape = (dots.shape[-2], dots.shape[-1])
keep_prob = tie_in(dots, 1.0 - dropout)
keep = fastmath.random.bernoulli(rng, keep_prob, dropout_shape)
multiplier = keep.astype(dots.dtype) / tie_in(keep, keep_prob)
dots = dots * multiplier
# The softmax normalizer (dots_logsumexp) is used by multi-round LSH attn.
out = np.matmul(dots, v)
out = np.reshape(out, (-1, out.shape[-1]))
dots_logsumexp = np.reshape(dots_logsumexp, (-1,))
return out, dots_logsumexp
class OurLSHSelfAttention(tl.LSHSelfAttention):
"""Our simplified LSH self-attention """
def forward_unbatched(self, x, mask=None, *, weights, state, rng, update_state):
attend_rng, output_rng = fastmath.random.split(rng)
w_q, w_v, w_o = weights
q = np.matmul(x, w_q)
v = np.matmul(x, w_v)
if update_state:
_, old_hash_rng = state
hash_rng, hash_subrng = fastmath.random.split(old_hash_rng)
# buckets = self.hash_vectors(q, hash_subrng, mask) # original
## use our version of hash
buckets = our_hash_vectors(
q, hash_subrng, self.n_buckets, self.n_hashes, mask=mask
)
s_buckets = buckets
if self._max_length_for_buckets:
length = self.n_hashes * self._max_length_for_buckets
if buckets.shape[0] < length:
s_buckets = np.concatenate(
[buckets, np.zeros(length - buckets.shape[0], dtype=np.int32)],
axis=0,
)
state = (s_buckets, hash_rng)
else:
buckets, _ = state
if self._max_length_for_buckets:
buckets = buckets[: self.n_hashes * x.shape[0]]
seqlen = x.shape[0]
assert int(buckets.shape[0]) == self.n_hashes * seqlen
ticker = tie_in(x, np.arange(self.n_hashes * seqlen, dtype=np.int32))
buckets_and_t = seqlen * buckets + (ticker % seqlen)
buckets_and_t = fastmath.stop_gradient(buckets_and_t)
# Hash-based sort ("s" at the start of variable names means "sorted")
sbuckets_and_t, sticker = fastmath.sort_key_val(
buckets_and_t, ticker, dimension=-1
)
_, undo_sort = fastmath.sort_key_val(sticker, ticker, dimension=-1)
sbuckets_and_t = fastmath.stop_gradient(sbuckets_and_t)
sticker = fastmath.stop_gradient(sticker)
undo_sort = fastmath.stop_gradient(undo_sort)
st = sticker % seqlen
sq = np.take(q, st, axis=0)
sv = np.take(v, st, axis=0)
mask_fn = functools.partial(
mask_self_attention,
causal=self.causal,
exclude_self=True,
masked=self.masked,
)
q_info = st
assert (mask is not None) == self.masked
kv_info = None
if self.masked:
# mask is a boolean array (True means "is valid token")
smask = np.take(mask, st, axis=0)
ones_like_mask = tie_in(x, np.ones_like(smask, dtype=np.int32))
kv_info = q_info * np.where(smask, ones_like_mask, -ones_like_mask)
## use the original version of attend (we could use ours, but it lacks support for mask_fn and kv_info masking)
so, slogits = attend(
sq,
k=None,
v=sv,
q_chunk_len=self.chunk_len,
n_chunks_before=self.n_chunks_before,
n_chunks_after=self.n_chunks_after,
mask_fn=mask_fn,
q_info=q_info,
kv_info=kv_info,
dropout=self.attention_dropout,
rng=attend_rng,
)
# np.take(so, undo_sort, axis=0); np.take(slogits, undo_sort, axis=0) would
# also work, but these helpers include performance optimizations for TPU.
o = permute_via_gather(so, undo_sort, sticker, axis=0)
logits = permute_via_sort(slogits, sticker, buckets_and_t, axis=-1)
if self.n_hashes > 1:
o = np.reshape(o, (self.n_hashes, seqlen, o.shape[-1]))
logits = np.reshape(logits, (self.n_hashes, seqlen, 1))
probs = np.exp(logits - fastmath.logsumexp(logits, axis=0, keepdims=True))
o = np.sum(o * probs, axis=0)
assert o.shape == (seqlen, w_v.shape[-1])
out = np.matmul(o, w_o)
out = apply_broadcasted_dropout(out, self.output_dropout, output_rng)
return out, state
# Here we're going to try out our LSHSelfAttention
n_heads = 3
causal = False
masked = False
mask = None
chunk_len = 8
n_chunks_before = 0
n_chunks_after = 0
attention_dropout = 0.0
n_hashes = 5
n_buckets = 4
seq_len = 8
emb_len = 5
al = OurLSHSelfAttention(
n_heads=n_heads,
d_qk=3,
d_v=4,
causal=causal,
chunk_len=chunk_len,
n_chunks_before=n_chunks_before,
n_chunks_after=n_chunks_after,
n_hashes=n_hashes,
n_buckets=n_buckets,
use_reference_code=True,
attention_dropout=attention_dropout,
mode="train",
)
x = jax.random.uniform(jax.random.PRNGKey(0), (1, seq_len, emb_len), dtype=np.float32)
al_osa = fastmath.random.get_prng(1)
_, _ = al.init(tl.shapes.signature(x), rng=al_osa)
al(x) | using jax
using jax
using jax
| MIT | Natural Language Processing Specialization/chatbot/C4_W4_Ungraded_Lab_Reformer_LSH.ipynb | aibenStunner/NLP-specialization |
Exercise 5 - Variational quantum eigensolver Historical backgroundDuring the last decade, quantum computers matured quickly and began to realize Feynman's initial dream of a computing system that could simulate the laws of nature in a quantum way. A 2014 paper first authored by Alberto Peruzzo introduced the **Variational Quantum Eigensolver (VQE)**, an algorithm meant for finding the ground state energy (lowest energy) of a molecule, with much shallower circuits than other approaches.[1] And, in 2017, the IBM Quantum team used the VQE algorithm to simulate the ground state energy of the lithium hydride molecule.[2]VQE's magic comes from outsourcing some of the problem's processing workload to a classical computer. The algorithm starts with a parameterized quantum circuit called an ansatz (a best guess), then finds the optimal parameters for this circuit using a classical optimizer. The VQE's advantage over classical algorithms comes from the fact that a quantum processing unit can represent and store the problem's exact wavefunction, an exponentially hard problem for a classical computer. This exercise 5 allows you to realize Feynman's dream yourself, setting up a variational quantum eigensolver to determine the ground state and the energy of a molecule. This is interesting because the ground state can be used to calculate various molecular properties, for instance the exact forces on nuclei that can serve to run molecular dynamics simulations to explore what happens in chemical systems with time.[3] References1. Peruzzo, Alberto, et al. "A variational eigenvalue solver on a photonic quantum processor." Nature communications 5.1 (2014): 1-7.2. Kandala, Abhinav, et al. "Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets." Nature 549.7671 (2017): 242-246.3. Sokolov, Igor O., et al. "Microcanonical and finite-temperature ab initio molecular dynamics simulations on quantum computers." Physical Review Research 3.1 (2021): 013125. IntroductionFor the implementation of VQE, you will be able to make choices on how you want to compose your simulation, in particular focusing on the ansatz quantum circuits.This is motivated by the fact that one of the important tasks when running VQE on noisy quantum computers is to reduce the loss of fidelity (which introduces errors) by finding the most compact quantum circuit capable of representing the ground state.Practically, this entails minimizing the number of two-qubit gates (e.g. CNOTs) while not losing accuracy.Goal Find the shortest ansatz circuits for representing accurately the ground state of given problems. Be creative! Plan First you will learn how to compose a VQE simulation for the smallest molecule and then apply what you have learned to the case of a larger one. **1. Tutorial - VQE for H$_2$:** familiarize yourself with VQE and select the best combination of ansatz/classical optimizer by running statevector simulations.**2. Final Challenge - VQE for LiH:** perform a similar investigation as in the first part, but restricted to the statevector simulator only. Use the qubit number reduction schemes available in Qiskit and find the optimal circuit for this larger system. Optimize the circuit and use your imagination to find ways to select the best building blocks of parameterized circuits and compose them to construct the most compact ansatz circuit for the ground state, better than the ones already available in Qiskit. Below is an introduction to the theory behind VQE simulations. 
You don't have to understand the whole thing before moving on. Don't be scared! TheoryHere below is the general workflow representing how molecular simulations using VQE are performed on quantum computers.The core idea of the hybrid quantum-classical approach is to outsource to the **CPU (classical processing unit)** and **QPU (quantum processing unit)** the parts that they each do best. The CPU takes care of listing the terms that need to be measured to compute the energy and also of optimizing the circuit parameters. The QPU implements a quantum circuit representing the quantum state of a system and measures the energy. Some more details are given below:**CPU** can efficiently compute the energies associated with electron hopping and interactions (one-/two-body integrals by means of a Hartree-Fock calculation) that serve to represent the total energy operator, the Hamiltonian. The [Hartree–Fock (HF) method](https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method#:~:text=In%20computational%20physics%20and%20chemistry,system%20in%20a%20stationary%20state.) efficiently computes an approximate ground state wavefunction by assuming that the latter can be represented by a single Slater determinant (e.g. for the H$_2$ molecule in the STO-3G basis with 4 spin-orbitals and qubits, $|\Psi_{HF} \rangle = |0101 \rangle$ where electrons occupy the lowest energy spin-orbitals). What the QPU does later in VQE is find a quantum state (corresponding circuit and its parameters) that can also represent other states associated with missing electronic correlations (i.e. the $\sum_i c_i |i\rangle$ states in $|\Psi \rangle = c_{HF}|\Psi_{HF} \rangle + \sum_i c_i |i\rangle $ where $i$ is a bitstring). After a HF calculation, operators in the Hamiltonian are mapped to measurements on a QPU using fermion-to-qubit transformations (see the Hamiltonian section below). One can further analyze the properties of the system to reduce the number of qubits or shorten the ansatz circuit:- For Z2 symmetries and two-qubit reduction, see [Bravyi *et al.*, 2017](https://arxiv.org/abs/1701.08213v1).- For entanglement forging, see [Eddins *et al.*, 2021](https://arxiv.org/abs/2104.10220v1).- For the adaptive ansatz, see [Grimsley *et al.*, 2018](https://arxiv.org/abs/1812.11173v2), [Rattew *et al.*, 2019](https://arxiv.org/abs/1910.09694), [Tang *et al.*, 2019](https://arxiv.org/abs/1911.10205). You may use the ideas found in those works to find ways to shorten the quantum circuits.**QPU** implements quantum circuits (see the Ansatzes section below), parameterized by angles $\vec\theta$, that represent the ground state wavefunction by placing various single qubit rotations and entanglers (e.g. two-qubit gates). The quantum advantage lies in the fact that the QPU can efficiently represent and store the exact wavefunction, which becomes intractable on a classical computer for systems that have more than a few atoms. Finally, the QPU measures the operators of choice (e.g. ones representing a Hamiltonian).Below we go into slightly more mathematical detail on each component of the VQE algorithm. It might also be helpful to watch our [video episode about VQE](https://www.youtube.com/watch?v=Z-A6G0WVI9w). 
Hamiltonian Here we explain how we obtain the operators that we need to measure to obtain the energy of a given system.These terms are included in the molecular Hamiltonian defined as:$$\begin{aligned}\hat{H} &=\sum_{r s} h_{r s} \hat{a}_{r}^{\dagger} \hat{a}_{s} \\&+\frac{1}{2} \sum_{p q r s} g_{p q r s} \hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}+E_{N N}\end{aligned}$$with$$h_{p q}=\int \phi_{p}^{*}(r)\left(-\frac{1}{2} \nabla^{2}-\sum_{I} \frac{Z_{I}}{R_{I}-r}\right) \phi_{q}(r)$$$$g_{p q r s}=\int \frac{\phi_{p}^{*}\left(r_{1}\right) \phi_{q}^{*}\left(r_{2}\right) \phi_{r}\left(r_{2}\right) \phi_{s}\left(r_{1}\right)}{\left|r_{1}-r_{2}\right|} $$where the $h_{r s}$ and $g_{p q r s}$ are the one-/two-body integrals (using the Hartree-Fock method) and $E_{N N}$ the nuclear repulsion energy. The one-body integrals represent the kinetic energy of the electrons and their interaction with nuclei. The two-body integrals represent the electron-electron interaction.The $\hat{a}_{r}^{\dagger}, \hat{a}_{r}$ operators represent creation and annihilation of an electron in spin-orbital $r$ and require mappings to qubit operators, so that we can measure them on a quantum computer.Note that VQE minimizes the electronic energy, so you have to retrieve and add the nuclear repulsion energy $E_{NN}$ to compute the total energy. So, for every non-zero matrix element in the $ h_{r s}$ and $g_{p q r s}$ tensors, we can construct a corresponding Pauli string (tensor product of Pauli operators) with the following fermion-to-qubit transformation. For instance, in the Jordan-Wigner mapping for an orbital $r = 3$, we obtain the following Pauli string:$$\hat a_{3}^{\dagger}= \hat \sigma_z \otimes \hat \sigma_z \otimes\left(\frac{ \hat \sigma_x-i \hat \sigma_y}{2}\right) \otimes 1 \otimes \cdots \otimes 1$$where $\hat \sigma_x, \hat \sigma_y, \hat \sigma_z$ are the well-known Pauli operators. The tensor products of $\hat \sigma_z$ operators are placed to enforce the fermionic anti-commutation relations.A representation of the Jordan-Wigner mapping between the 14 spin-orbitals of a water molecule and 14 qubits is given below:Then, one simply replaces the one-/two-body excitations (e.g. $\hat{a}_{r}^{\dagger} \hat{a}_{s}$, $\hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}$) in the Hamiltonian by the corresponding Pauli strings (i.e. $\hat{P}_i$, see picture above). The resulting operator set is ready to be measured on the QPU.For additional details see [Seeley *et al.*, 2012](https://arxiv.org/abs/1208.5986v1). AnsatzesThere are mainly two types of ansatzes you can use for chemical problems. - **q-UCC ansatzes** are physically inspired, and roughly map the electron excitations to quantum circuits. The q-UCCSD ansatz (`UCCSD` in Qiskit) possesses all possible single and double electron excitations. The paired double q-pUCCD (`PUCCD`) and singlet q-UCCD0 (`SUCCD`) just consider a subset of such excitations (meaning significantly shorter circuits) and have proved to provide good results for dissociation profiles. For instance, q-pUCCD doesn't have single excitations and the double excitations are paired as in the image below.- **Heuristic ansatzes (`TwoLocal`)** were invented to shorten the circuit depth but still be able to represent the ground state. As in the figure below, the R gates represent the parametrized single qubit rotations and $U_{CNOT}$ the entanglers (two-qubit gates). 
The idea is that after repeating the same block $D$ times (with independent parameters) one can reach the ground state. For additional details refer to [Sokolov *et al.* (q-UCC ansatzes)](https://arxiv.org/abs/1911.10864v2) and [Barkoutsos *et al.* (Heuristic ansatzes)](https://arxiv.org/pdf/1805.04340.pdf). VQEGiven a Hermitian operator $\hat H$ with an unknown minimum eigenvalue $E_{min}$, associated with the eigenstate $|\psi_{min}\rangle$, VQE provides an estimate $E_{\theta}$, bounded by $E_{min}$:\begin{align*} E_{min} \le E_{\theta} \equiv \langle \psi(\theta) |\hat H|\psi(\theta) \rangle\end{align*} where $|\psi(\theta)\rangle$ is the trial state associated with $E_{\theta}$. By applying a parameterized circuit, represented by $U(\theta)$, to some arbitrary starting state $|\psi\rangle$, the algorithm obtains an estimate $U(\theta)|\psi\rangle \equiv |\psi(\theta)\rangle$ of $|\psi_{min}\rangle$. The estimate is iteratively optimized by a classical optimizer that changes the parameter $\theta$ and minimizes the expectation value $\langle \psi(\theta) |\hat H|\psi(\theta) \rangle$. Applications of VQE include molecular dynamics simulations, see [Sokolov *et al.*, 2021](https://arxiv.org/abs/2008.08144v1), and excited state calculations, see [Ollitrault *et al.*, 2019](https://arxiv.org/abs/1910.12890), to name a few. References for additional details For the qiskit-nature tutorial that implements this algorithm see [here](https://qiskit.org/documentation/nature/tutorials/01_electronic_structure.html), but this won't be sufficient and you might also want to look at the [first page of the github repository](https://github.com/Qiskit/qiskit-nature) and the [test folder](https://github.com/Qiskit/qiskit-nature/tree/main/test) containing tests written for each component; they provide the base code for the use of each functionality. Part 1: Tutorial - VQE for H$_2$ molecule In this part, you will simulate the H$_2$ molecule using the STO-3G basis with the PySCF driver and Jordan-Wigner mapping.We will guide you through the following parts so that you can then tackle harder problems. 1. DriverThe interfaces to the classical chemistry codes that are available in Qiskit are called drivers.For example, `PSI4Driver`, `PyQuanteDriver`, and `PySCFDriver` are available. By running a driver (a Hartree-Fock calculation for a given basis set and molecular geometry) in the cell below, we obtain all the necessary information about our molecule to then apply a quantum algorithm. | from qiskit_nature.drivers import PySCFDriver
molecule = "H .0 .0 .0; H .0 .0 0.739"
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run() | _____no_output_____ | Apache-2.0 | solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
Tutorial questions 1 Look into the attributes of `qmolecule` and answer the questions below. 1. We need to know the basic characteristics of our molecule. What is the total number of electrons in your system? 2. What is the number of molecular orbitals? 3. What is the number of spin-orbitals? 4. How many qubits would you need to simulate this molecule with Jordan-Wigner mapping? 5. What is the value of the nuclear repulsion energy? You can find the answers at the end of this notebook. 2. Electronic structure problemYou can then create an `ElectronicStructureProblem` that can produce the list of fermionic operators before mapping them to qubits (Pauli strings). | from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem
problem = ElectronicStructureProblem(driver)
# Generate the second-quantized operators
second_q_ops = problem.second_q_ops()
# Hamiltonian
main_op = second_q_ops[0] | _____no_output_____ | Apache-2.0 | solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
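A minimal sketch of how one might answer Tutorial questions 1 directly in code - the attribute names below follow qiskit-nature's `QMolecule` (the object returned by the driver), so double-check them against your installed version:

```python
# Inspect the driver output (answers to Tutorial questions 1)
print('Total number of electrons:', qmolecule.num_alpha + qmolecule.num_beta)
print('Number of molecular orbitals:', qmolecule.num_molecular_orbitals)
print('Number of spin-orbitals:', 2 * qmolecule.num_molecular_orbitals)
# With Jordan-Wigner mapping there is one qubit per spin-orbital:
print('Qubits needed:', 2 * qmolecule.num_molecular_orbitals)
print('Nuclear repulsion energy (Ha):', qmolecule.nuclear_repulsion_energy)
```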
3. QubitConverterThis allows you to define the mapping that you will use in the simulation. You can try different mappings, but we will stick to `JordanWignerMapper`, as it allows a simple correspondence: a qubit represents a spin-orbital in the molecule. | from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper
from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter
# Setup the mapper and qubit converter
mapper_type = 'JordanWignerMapper'
if mapper_type == 'ParityMapper':
mapper = ParityMapper()
elif mapper_type == 'JordanWignerMapper':
mapper = JordanWignerMapper()
elif mapper_type == 'BravyiKitaevMapper':
mapper = BravyiKitaevMapper()
converter = QubitConverter(mapper=mapper, two_qubit_reduction=True)
# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles) | _____no_output_____ | Apache-2.0 | solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
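The Jordan-Wigner formula quoted in the theory section can be sanity-checked with plain numpy. The snippet below is a sketch (not part of the original notebook): it builds the Pauli-string image of $\hat a_3^\dagger$ on 4 spin-orbitals and verifies the fermionic anticommutation relation $\{\hat a_3, \hat a_3^\dagger\} = 1$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

# a_3^dagger -> Z (x) Z (x) (X - iY)/2 (x) I, as in the formula above
a3_dag = np.kron(np.kron(np.kron(Z, Z), (X - 1j * Y) / 2), I)
a3 = a3_dag.conj().T

# {a_3, a_3^dagger} should be the identity on the 16-dimensional space
print(np.allclose(a3 @ a3_dag + a3_dag @ a3, np.eye(16)))  # True
```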
4. Initial stateAs we described in the Theory section, a good initial state in chemistry is the HF state (i.e. $|\Psi_{HF} \rangle = |0101 \rangle$). We can initialize it as follows: | from qiskit_nature.circuit.library import HartreeFock
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
print(init_state) |      ┌───┐
q_0: ┤ X ├
     └───┘
q_1: ─────
     ┌───┐
q_2: ┤ X ├
     └───┘
q_3: ─────
| Apache-2.0 | solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
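As a quick sanity check (a sketch using `qiskit.quantum_info`), the circuit above should prepare exactly the $|0101\rangle$ Hartree-Fock bitstring:

```python
from qiskit.quantum_info import Statevector

probs = Statevector.from_instruction(init_state).probabilities_dict()
print(probs)  # expect {'0101': 1.0} (bits are read q_3 q_2 q_1 q_0, little-endian)
```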
5. AnsatzOne of the most important choices is the quantum circuit that you choose to approximate your ground state.Below is an example using the Qiskit circuit library, which contains many possibilities for making your own circuit. | from qiskit.circuit.library import TwoLocal
from qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD
# Choose the ansatz
ansatz_type = "TwoLocal"
# Parameters for q-UCC antatze
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
# Put arguments for twolocal
if ansatz_type == "TwoLocal":
# Single qubit rotations that are placed on all qubits with independent parameters
rotation_blocks = ['ry', 'rz']
# Entangling gates
entanglement_blocks = 'cx'
# How the qubits are entangled
entanglement = 'full'
# Repetitions of rotation_blocks + entanglement_blocks with independent parameters
repetitions = 3
# Skip the final rotation_blocks layer
skip_final_rotation_layer = True
ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions,
entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer)
# Add the initial state
ansatz.compose(init_state, front=True, inplace=True)
elif ansatz_type == "UCCSD":
ansatz = UCCSD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "PUCCD":
ansatz = PUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "SUCCD":
ansatz = SUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "Custom":
# Example of how to write your own circuit
from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister
# Define the variational parameter
theta = Parameter('a')
n = qubit_op.num_qubits
# Make an empty quantum circuit
qc = QuantumCircuit(qubit_op.num_qubits)
qubit_label = 0
# Place a Hadamard gate
qc.h(qubit_label)
# Visual separator
qc.barrier()
# ry and rz rotations (sharing the same parameter) on all qubits
qc.ry(theta, range(n))
qc.rz(theta, range(n))
ansatz = qc
ansatz.compose(init_state, front=True, inplace=True)
print(ansatz) | [Text drawing of the ansatz; the box-drawing characters were corrupted in extraction. The 4-qubit circuit begins with the Hartree-Fock X gates on q_0 and q_2, followed by three repetitions of a layer of RY(θ[i]) and RZ(θ[j]) rotations on every qubit and a full set of CX entanglers; 24 parameters θ[0]-θ[23], with the final rotation layer skipped.] | Apache-2.0 | solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb | fazliberkordek/ibm-quantum-challenge-2021
6. BackendThis is where you specify the simulator or device where you want to run your algorithm.We will focus on the `statevector_simulator` in this challenge. | from qiskit import Aer
backend = Aer.get_backend('statevector_simulator') | _____no_output_____ | Apache-2.0 | solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
7. OptimizerThe optimizer guides the evolution of the parameters of the ansatz, so it is very important to investigate the energy convergence, as it determines the number of measurements that have to be performed on the QPU.A clever choice might drastically reduce the number of energy evaluations needed. | from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B, SPSA, SLSQP
optimizer_type = 'COBYLA'
# You may want to tune the parameters
# of each optimizer, here the defaults are used
if optimizer_type == 'COBYLA':
optimizer = COBYLA(maxiter=500)
elif optimizer_type == 'L_BFGS_B':
optimizer = L_BFGS_B(maxfun=500)
elif optimizer_type == 'SPSA':
optimizer = SPSA(maxiter=500)
elif optimizer_type == 'SLSQP':
optimizer = SLSQP(maxiter=500) | _____no_output_____ | Apache-2.0 | solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
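As a hypothetical example of such tuning (the `tol` keyword follows Qiskit's COBYLA signature - verify it for your version), a tolerance can stop the optimizer earlier and save energy evaluations:

```python
optimizer = COBYLA(maxiter=250, tol=1e-4)  # a sketch, not the settings used below
```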
8. Exact eigensolverFor learning purposes, we can solve the problem exactly with the exact diagonalization of the Hamiltonian matrix so we know where to aim with VQE.Of course, the dimensions of this matrix scale exponentially in the number of molecular orbitals so you can try doing this for a large molecule of your choice and see how slow this becomes. For very large systems you would run out of memory trying to store their wavefunctions. | from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
import numpy as np
def exact_diagonalizer(problem, converter):
solver = NumPyMinimumEigensolverFactory()
calc = GroundStateEigensolver(converter, solver)
result = calc.solve(problem)
return result
result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact electronic energy", exact_energy)
print(result_exact)
# The targeted electronic energy for H2 is -1.85336 Ha
# Check with your VQE result. | Exact electronic energy -1.8533636186720424
=== GROUND STATE ENERGY ===
* Electronic ground state energy (Hartree): -1.853363618672
- computed part: -1.853363618672
~ Nuclear repulsion energy (Hartree): 0.716072003951
> Total ground state energy (Hartree): -1.137291614721
=== MEASURED OBSERVABLES ===
0: # Particles: 2.000 S: 0.000 S^2: 0.000 M: 0.000
=== DIPOLE MOMENTS ===
~ Nuclear dipole moment (a.u.): [0.0 0.0 1.39650761]
0:
* Electronic dipole moment (a.u.): [0.0 0.0 1.39650761]
- computed part: [0.0 0.0 1.39650761]
> Dipole moment (a.u.): [0.0 0.0 0.0] Total: 0.
(debye): [0.0 0.0 0.00000001] Total: 0.00000001
| Apache-2.0 | solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
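Since VQE minimizes only the electronic energy, the nuclear repulsion term has to be added back to obtain the total energy; a small check (a sketch) reproducing the solver's printout above:

```python
e_nn = qmolecule.nuclear_repulsion_energy
print('Total ground state energy (Ha):', exact_energy + e_nn)  # ~ -1.137291614721
```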
9. VQE and initial parameters for the ansatzNow we can import the VQE class and run the algorithm. | from qiskit.algorithms import VQE
from IPython.display import display, clear_output
# Print and save the data in lists
def callback(eval_count, parameters, mean, std):
# Overwrites the same line when printing
display("Evaluation: {}, Energy: {}, Std: {}".format(eval_count, mean, std))
clear_output(wait=True)
counts.append(eval_count)
values.append(mean)
params.append(parameters)
deviation.append(std)
counts = []
values = []
params = []
deviation = []
# Set initial parameters of the ansatz
# We choose a fixed small displacement
# So all participants start from similar starting point
try:
initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
initial_point = [0.01] * ansatz.num_parameters
algorithm = VQE(ansatz,
optimizer=optimizer,
quantum_instance=backend,
callback=callback,
initial_point=initial_point)
result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result) | OrderedDict([ ('aux_operator_eigenvalues', None),
('cost_function_evals', 500),
( 'eigenstate',
array([ 1.72642837e-07+8.50403202e-06j, -1.78929971e-04-1.81951230e-05j,
-3.69523167e-06-1.34495890e-05j, -2.10924080e-04+1.77214969e-04j,
4.99046244e-06-2.06613556e-06j, 6.74694778e-01+7.29486483e-01j,
-1.24182388e-03+6.51317093e-04j, 1.41053276e-08-5.68212233e-09j,
-5.49430298e-09-7.34506476e-08j, 1.14565968e-03+4.24895118e-04j,
-7.62192188e-02-8.26042985e-02j, -3.51865303e-05+3.22859610e-05j,
2.63445328e-05-1.93206120e-05j, 3.26507775e-05+1.21079129e-04j,
-1.86705090e-05-8.50728883e-06j, -2.68804234e-05-2.39896274e-05j])),
('eigenvalue', -1.8533611875222795),
( 'optimal_parameters',
{ ParameterVectorElement(θ[8]): -9.823669287699566e-06,
ParameterVectorElement(θ[0]): 0.22528384153894793,
ParameterVectorElement(θ[4]): 0.22876817609748787,
ParameterVectorElement(θ[1]): 0.0001628358952911501,
ParameterVectorElement(θ[12]): 1.024655074911573,
ParameterVectorElement(θ[18]): -0.012431877150681298,
ParameterVectorElement(θ[17]): 3.1420635330869464,
ParameterVectorElement(θ[23]): 0.7496785259549152,
ParameterVectorElement(θ[9]): -7.206149843099831e-05,
ParameterVectorElement(θ[3]): 0.7058281857030912,
ParameterVectorElement(θ[22]): 1.4067582441060544,
ParameterVectorElement(θ[10]): -0.004145381016989967,
ParameterVectorElement(θ[13]): 0.17466072749351547,
ParameterVectorElement(θ[14]): 0.05501516914713419,
ParameterVectorElement(θ[2]): -0.008902430245345148,
ParameterVectorElement(θ[20]): -0.022492568309320043,
ParameterVectorElement(θ[5]): -0.7648677076914686,
ParameterVectorElement(θ[21]): 1.3225524343419832,
ParameterVectorElement(θ[15]): -0.055011768252079846,
ParameterVectorElement(θ[7]): -0.06860900246768789,
ParameterVectorElement(θ[16]): 0.0005545694957856072,
ParameterVectorElement(θ[11]): -0.23452291161472869,
ParameterVectorElement(θ[19]): -0.9396567959736827,
ParameterVectorElement(θ[6]): 0.23743855054021837}),
( 'optimal_point',
array([ 2.25283842e-01, -4.14538102e-03, -2.34522912e-01, 1.02465507e+00,
1.74660727e-01, 5.50151691e-02, -5.50117683e-02, 5.54569496e-04,
3.14206353e+00, -1.24318772e-02, -9.39656796e-01, 1.62835895e-04,
-2.24925683e-02, 1.32255243e+00, 1.40675824e+00, 7.49678526e-01,
-8.90243025e-03, 7.05828186e-01, 2.28768176e-01, -7.64867708e-01,
2.37438551e-01, -6.86090025e-02, -9.82366929e-06, -7.20614984e-05])),
('optimal_value', -1.8533611875222795),
('optimizer_evals', 500),
('optimizer_time', 6.67009711265564)])
| Apache-2.0 | solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
10. Scoring functionWe need to judge how good your VQE simulations and your choice of ansatz/optimizer are.For this, we implemented the following simple scoring function:$$ score = N_{CNOT}$$where $N_{CNOT}$ is the number of CNOTs. But you have to reach the chemical accuracy, which is $\delta E_{chem} = 0.004$ Ha $= 4$ mHa and may be hard to reach depending on the problem. You have to reach the accuracy we set in a minimal number of CNOTs to win the challenge. The lower the score the better! | # Store results in a dictionary
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller
# Unroller transpile your circuit into CNOTs and U gates
pass_ = Unroller(['u', 'cx'])
pm = PassManager(pass_)
ansatz_tp = pm.run(ansatz)
cnots = ansatz_tp.count_ops()['cx']
score = cnots
accuracy_threshold = 4.0 # in mHa
energy = result.optimal_value
if ansatz_type == "TwoLocal":
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': rotation_blocks,
'entanglement_blocks': entanglement_blocks,
'entanglement': entanglement,
'repetitions': repetitions,
'skip_final_rotation_layer': skip_final_rotation_layer,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
else:
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': None,
'entanglement_blocks': None,
'entanglement': None,
'repetitions': None,
'skip_final_rotation_layer': None,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
# Plot the results
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('Iterations')
ax.set_ylabel('Energy')
ax.grid()
fig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}')
plt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}")
ax.plot(counts, values)
ax.axhline(exact_energy, linestyle='--')
fig_title = f"\
{result_dict['optimizer']}-\
{result_dict['mapping']}-\
{result_dict['ansatz']}-\
Energy({result_dict['energy (Ha)']:.3f})-\
Score({result_dict['score']:.0f})\
.png"
fig.savefig(fig_title, dpi=300)
# Display and save the data
import pandas as pd
import os.path
filename = 'results_h2.csv'
if os.path.isfile(filename):
result_df = pd.read_csv(filename)
result_df = result_df.append([result_dict])
else:
result_df = pd.DataFrame.from_dict([result_dict])
result_df.to_csv(filename)
result_df[['optimizer','ansatz', '# of qubits', '# of parameters','rotation blocks', 'entanglement_blocks',
'entanglement', 'repetitions', 'error (mHa)', 'pass', 'score']] | _____no_output_____ | Apache-2.0 | solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
Tutorial questions 2 Experiment with all the parameters and then:1. Can you find your best (best score) heuristic ansatz (by modifying parameters of the `TwoLocal` ansatz) and optimizer?2. Can you find your best q-UCC ansatz (choose among the `UCCSD, PUCCD or SUCCD` ansatzes) and optimizer?3. In the cell where we define the ansatz, can you modify the `Custom` ansatz by placing gates yourself to write a better circuit than your `TwoLocal` circuit? For each question, give `ansatz` objects.Remember, you have to reach the chemical accuracy $|E_{exact} - E_{VQE}| \leq 0.004 $ Ha $= 4$ mHa. Part 2: Final Challenge - VQE for LiH molecule In this part, you will simulate the LiH molecule using the STO-3G basis with the PySCF driver. Goal Experiment with all the parameters and then find your best ansatz. You can be as creative as you want!For each question, give `ansatz` objects as for Part 1. Your final score will be based only on Part 2. Be aware that the system is larger now. Work out how many qubits you would need for this system by retrieving the number of spin-orbitals. Reducing the problem sizeYou might want to reduce the number of qubits for your simulation:- you could freeze the core electrons that do not contribute significantly to chemistry and consider only the valence electrons. Qiskit already has this functionality implemented. So inspect the different transformers in `qiskit_nature.transformers` and find the one that performs the freeze core approximation.- you could use `ParityMapper` with `two_qubit_reduction=True` to eliminate 2 qubits.- you could reduce the number of qubits by inspecting the symmetries of your Hamiltonian. Find a way to use `Z2Symmetries` in Qiskit. Custom ansatz You might want to explore the ideas proposed in [Grimsley *et al.*, 2018](https://arxiv.org/abs/1812.11173v2), [H. L. Tang *et al.*, 2019](https://arxiv.org/abs/1911.10205), [Rattew *et al.*, 2019](https://arxiv.org/abs/1910.09694), [Tang *et al.*, 2019](https://arxiv.org/abs/1911.10205). You can even try machine learning algorithms to generate the best ansatz circuits. Setup the simulationLet's now run the Hartree-Fock calculation and the rest is up to you!Attention We give below the `driver`, the `initial_point`, and the `initial_state`, which should remain as given.You are then free to explore all other things available in Qiskit.So you have to start from this initial point (all parameters set to 0.01): `initial_point = [0.01] * len(ansatz.ordered_parameters)` or `initial_point = [0.01] * ansatz.num_parameters` and your initial state has to be the Hartree-Fock state: `init_state = HartreeFock(num_spin_orbitals, num_particles, converter)` For each question, give the `ansatz` object.Remember you have to reach the chemical accuracy $|E_{exact} - E_{VQE}| \leq 0.004 $ Ha $= 4$ mHa. | from qiskit_nature.drivers import PySCFDriver
molecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474'
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()
from qiskit_nature.transformers import FreezeCoreTransformer, ActiveSpaceTransformer
from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem
problem = ElectronicStructureProblem(driver, q_molecule_transformers=[FreezeCoreTransformer(remove_orbitals=[4, 3])])
# Generate the second-quantized operators
second_q_ops = problem.second_q_ops()
# Hamiltonian
main_op = second_q_ops[0]
from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper
from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter
# Setup the mapper and qubit converter
mapper_type = 'ParityMapper'
if mapper_type == 'ParityMapper':
mapper = ParityMapper()
converter = QubitConverter(mapper=mapper, two_qubit_reduction=True, z2symmetry_reduction=[1, 1])
# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles)
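# (Optional sketch) The challenge text suggests using Z2 symmetries; one way to
# inspect the symmetries of the qubit Hamiltonian - assuming `Z2Symmetries`
# lives in qiskit.opflow for this Qiskit version - is:
from qiskit.opflow import Z2Symmetries
print(Z2Symmetries.find_Z2_symmetries(qubit_op))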
from qiskit_nature.circuit.library import HartreeFock
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
print(init_state)
from qiskit import Aer
backend = Aer.get_backend('statevector_simulator')
from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
import numpy as np
def exact_diagonalizer(problem, converter):
solver = NumPyMinimumEigensolverFactory()
calc = GroundStateEigensolver(converter, solver)
result = calc.solve(problem)
return result
result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact electronic energy", exact_energy)
print(result_exact)
# LiH -> Exact electronic energy -1.089782396348737 --> -8.90847269193 Ha
# Check with your VQE result.
# WRITE YOUR CODE BETWEEN THESE LINES - START
from qiskit.circuit.library import TwoLocal
# Choose the ansatz
ansatz_type = "TwoLocal"
# Parameters for q-UCC antatze
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
# Put arguments for twolocal
if ansatz_type == "TwoLocal":
# Single qubit rotations that are placed on all qubits with independent parameters
rotation_blocks = ['ry', 'rz', 'rx']
# Entangling gates
entanglement_blocks = 'cx'
# How the qubits are entangled
entanglement = 'linear'
# Repetitions of rotation_blocks + entanglement_blocks with independent parameters
repetitions = 1
# Skip the final rotation_blocks layer
skip_final_rotation_layer = False
ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions,
entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer)
# Add the initial state
ansatz.compose(init_state, front=True, inplace=True)
print(ansatz)
from qiskit.algorithms.optimizers import COBYLA
optimizer_type = 'COBYLA'
if optimizer_type == 'COBYLA':
optimizer = COBYLA(maxiter=4000, disp=True)
from qiskit.algorithms import VQE
from IPython.display import display, clear_output
# Print and save the data in lists
def callback(eval_count, parameters, mean, std):
# Overwrites the same line when printing
display("Evaluation: {}, Energy: {}, Std: {}".format(eval_count, mean, std))
clear_output(wait=True)
counts.append(eval_count)
values.append(mean)
params.append(parameters)
deviation.append(std)
counts = []
values = []
params = []
deviation = []
# Set initial parameters of the ansatz
try:
initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
initial_point = [0.01] * ansatz.num_parameters
algorithm = VQE(ansatz,
optimizer=optimizer,
quantum_instance=backend,
callback=callback,
initial_point=initial_point)
result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result)
# WRITE YOUR CODE BETWEEN THESE LINES - END
# Check your answer using the following code
from qc_grader import grade_ex5
freeze_core = True # change to True if you freezed core electrons
grade_ex5(ansatz,qubit_op,result,freeze_core)
#-8.90314 -> -1.08444 -> linear -> 1 -> False -> 5 qubits -> 10k
# Submit your answer. You can re-submit at any time.
from qc_grader import submit_ex5
submit_ex5(ansatz,qubit_op,result,freeze_core) | Submitting your answer for ex5. Please wait...
Success 🎉! Your answer has been submitted.
| Apache-2.0 | solutions by participants/ex5/ex5-MichaelRollin-3cnot-?mHa-24params.ipynb | fazliberkordek/ibm-quantum-challenge-2021 |
Computer Vision Nanodegree Project: Image Captioning---In this notebook, you will learn how to load and pre-process data from the [COCO dataset](http://cocodataset.org/#home). You will also design a CNN-RNN model for automatically generating image captions.Note that **any amendments that you make to this notebook will not be graded**. However, you will use the instructions provided in **Step 3** and **Step 4** to implement your own CNN encoder and RNN decoder by making amendments to the **models.py** file provided as part of this project. Your **models.py** file **will be graded**. Feel free to use the links below to navigate the notebook:- [Step 1](#step1): Explore the Data Loader- [Step 2](#step2): Use the Data Loader to Obtain Batches- [Step 3](#step3): Experiment with the CNN Encoder- [Step 4](#step4): Implement the RNN Decoder Step 1: Explore the Data LoaderWe have already written a [data loader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) that you can use to load the COCO dataset in batches. In the code cell below, you will initialize the data loader by using the `get_loader` function in **data_loader.py**. > For this project, you are not permitted to change the **data_loader.py** file, which must be used as-is.The `get_loader` function takes as input a number of arguments that can be explored in **data_loader.py**. Take the time to explore these arguments now by opening **data_loader.py** in a new window. Most of the arguments must be left at their default values, and you are only allowed to amend the values of the arguments below:1. **`transform`** - an [image transform](http://pytorch.org/docs/master/torchvision/transforms.html) specifying how to pre-process the images and convert them to PyTorch tensors before using them as input to the CNN encoder. For now, you are encouraged to keep the transform as provided in `transform_train`. You will have the opportunity later to choose your own image transform to pre-process the COCO images.2. **`mode`** - one of `'train'` (loads the training data in batches) or `'test'` (for the test data). We will say that the data loader is in training or test mode, respectively. While following the instructions in this notebook, please keep the data loader in training mode by setting `mode='train'`.3. **`batch_size`** - determines the batch size. When training the model, this is the number of image-caption pairs used to amend the model weights in each training step.4. **`vocab_threshold`** - the total number of times that a word must appear in the training captions before it is used as part of the vocabulary. Words that have fewer than `vocab_threshold` occurrences in the training captions are considered unknown words. 5. **`vocab_from_file`** - a Boolean that decides whether to load the vocabulary from file. We will describe the `vocab_threshold` and `vocab_from_file` arguments in more detail soon. For now, run the code cell below. Be patient - it may take a couple of minutes to run! | import sys
sys.path.append('./cocoapi/PythonAPI')
from pycocotools.coco import COCO
!pip install nltk
import nltk
nltk.download('punkt')
from data_loader import get_loader
from torchvision import transforms
cocoapi_loc = '/mnt/data2/Project/Image-Captioning/'
# Define a transform to pre-process the training images.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Set the minimum word count threshold.
vocab_threshold = 5
# Specify the batch size.
batch_size = 10
# Obtain the data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
cocoapi_loc=cocoapi_loc,
vocab_from_file=False) | Requirement already satisfied: nltk in /home/hvlpr/anaconda3/lib/python3.7/site-packages (3.4.1)
Requirement already satisfied: six in /home/hvlpr/anaconda3/lib/python3.7/site-packages (from nltk) (1.12.0)
loading annotations into memory...
Done (t=0.46s)
creating index...
| MIT | 1_Preliminaries.ipynb | lanhhv84/Image-Captioning |
When you ran the code cell above, the data loader was stored in the variable `data_loader`. You can access the corresponding dataset as `data_loader.dataset`. This dataset is an instance of the `CoCoDataset` class in **data_loader.py**. If you are unfamiliar with data loaders and datasets, you are encouraged to review [this PyTorch tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html). Exploring the `__getitem__` MethodThe `__getitem__` method in the `CoCoDataset` class determines how an image-caption pair is pre-processed before being incorporated into a batch. This is true for all `Dataset` classes in PyTorch; if this is unfamiliar to you, please review [the tutorial linked above](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html). When the data loader is in training mode, this method begins by first obtaining the filename (`path`) of a training image and its corresponding caption (`caption`). Image Pre-Processing Image pre-processing is relatively straightforward (from the `__getitem__` method in the `CoCoDataset` class):```python Convert image to tensor and pre-process using transformimage = Image.open(os.path.join(self.img_folder, path)).convert('RGB')image = self.transform(image)```After loading the image in the training folder with name `path`, the image is pre-processed using the same transform (`transform_train`) that was supplied when instantiating the data loader. Caption Pre-Processing The captions also need to be pre-processed and prepped for training. In this example, for generating captions, we are aiming to create a model that predicts the next token of a sentence from previous tokens, so we turn the caption associated with any image into a list of tokenized words, before casting it to a PyTorch tensor that we can use to train the network.To understand in more detail how COCO captions are pre-processed, we'll first need to take a look at the `vocab` instance variable of the `CoCoDataset` class. The code snippet below is pulled from the `__init__` method of the `CoCoDataset` class:```pythondef __init__(self, transform, mode, batch_size, vocab_threshold, vocab_file, start_word, end_word, unk_word, annotations_file, vocab_from_file, img_folder): ... self.vocab = Vocabulary(vocab_threshold, vocab_file, start_word, end_word, unk_word, annotations_file, vocab_from_file) ...```From the code snippet above, you can see that `data_loader.dataset.vocab` is an instance of the `Vocabulary` class from **vocabulary.py**. Take the time now to verify this for yourself by looking at the full code in **data_loader.py**. We use this instance to pre-process the COCO captions (from the `__getitem__` method in the `CoCoDataset` class):```python Convert caption to tensor of word ids.tokens = nltk.tokenize.word_tokenize(str(caption).lower()) line 1caption = [] line 2caption.append(self.vocab(self.vocab.start_word)) line 3caption.extend([self.vocab(token) for token in tokens]) line 4caption.append(self.vocab(self.vocab.end_word)) line 5caption = torch.Tensor(caption).long() line 6```As you will see soon, this code converts any string-valued caption to a list of integers, before casting it to a PyTorch tensor. To see how this code works, we'll apply it to the sample caption in the next code cell. | sample_caption = 'A person doing a trick on a rail while riding a skateboard.' | _____no_output_____ | MIT | 1_Preliminaries.ipynb | lanhhv84/Image-Captioning |
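To see the image pre-processing in isolation, here is a small sketch; the file name below is hypothetical - substitute any image from your `train2014` folder:

```python
from PIL import Image

image = Image.open('cocoapi/images/train2014/COCO_train2014_000000000009.jpg').convert('RGB')
image = transform_train(image)
print(image.shape)  # torch.Size([3, 224, 224])
```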
In **`line 1`** of the code snippet, every letter in the caption is converted to lowercase, and the [`nltk.tokenize.word_tokenize`](http://www.nltk.org/) function is used to obtain a list of string-valued tokens. Run the next code cell to visualize the effect on `sample_caption`. | import nltk
sample_tokens = nltk.tokenize.word_tokenize(str(sample_caption).lower())
print(sample_tokens) | ['a', 'person', 'doing', 'a', 'trick', 'on', 'a', 'rail', 'while', 'riding', 'a', 'skateboard', '.']
| MIT | 1_Preliminaries.ipynb | lanhhv84/Image-Captioning |
In **`line 2`** and **`line 3`** we initialize an empty list and append an integer to mark the start of a caption. The [paper](https://arxiv.org/pdf/1411.4555.pdf) that you are encouraged to implement uses a special start word (and a special end word, which we'll examine below) to mark the beginning (and end) of a caption.This special start word (`"<start>"`) is decided when instantiating the data loader and is passed as a parameter (`start_word`). You are **required** to keep this parameter at its default value (`start_word="<start>"`).As you will see below, the integer `0` is always used to mark the start of a caption. | sample_caption = []
start_word = data_loader.dataset.vocab.start_word
print('Special start word:', start_word)
sample_caption.append(data_loader.dataset.vocab(start_word))
print(sample_caption) | Special start word: <start>
[0]
| MIT | 1_Preliminaries.ipynb | lanhhv84/Image-Captioning |
In **`line 4`**, we continue the list by adding integers that correspond to each of the tokens in the caption. | sample_caption.extend([data_loader.dataset.vocab(token) for token in sample_tokens])
print(sample_caption) | [0, 3, 98, 754, 3, 396, 39, 3, 1009, 207, 139, 3, 753, 18]
| MIT | 1_Preliminaries.ipynb | lanhhv84/Image-Captioning |
In **`line 5`**, we append a final integer to mark the end of the caption. Identical to the case of the special start word (above), the special end word (`"<end>"`) is decided when instantiating the data loader and is passed as a parameter (`end_word`). You are **required** to keep this parameter at its default value (`end_word="<end>"`).As you will see below, the integer `1` is always used to mark the end of a caption. | end_word = data_loader.dataset.vocab.end_word
print('Special end word:', end_word)
sample_caption.append(data_loader.dataset.vocab(end_word))
print(sample_caption) | Special end word: <end>
[0, 3, 98, 754, 3, 396, 39, 3, 1009, 207, 139, 3, 753, 18, 1]
| MIT | 1_Preliminaries.ipynb | lanhhv84/Image-Captioning |
Finally, in **`line 6`**, we convert the list of integers to a PyTorch tensor and cast it to [long type](http://pytorch.org/docs/master/tensors.html#torch.Tensor.long). You can read more about the different types of PyTorch tensors on the [website](http://pytorch.org/docs/master/tensors.html). | import torch
sample_caption = torch.Tensor(sample_caption).long()
print(sample_caption) | tensor([ 0, 3, 98, 754, 3, 396, 39, 3, 1009, 207, 139, 3,
753, 18, 1])
| MIT | 1_Preliminaries.ipynb | lanhhv84/Image-Captioning |
And that's it! In summary, any caption is converted to a list of tokens, with _special_ start and end tokens marking the beginning and end of the sentence:```[<start>, 'a', 'person', 'doing', 'a', 'trick', 'on', 'a', 'rail', 'while', 'riding', 'a', 'skateboard', '.', <end>]```This list of tokens is then turned into a list of integers, where every distinct word in the vocabulary has an associated integer value:```[0, 3, 98, 754, 3, 396, 39, 3, 1009, 207, 139, 3, 753, 18, 1]```Finally, this list is converted to a PyTorch tensor. All of the captions in the COCO dataset are pre-processed using this same procedure from **`lines 1-6`** described above. As you saw, in order to convert a token to its corresponding integer, we call `data_loader.dataset.vocab` as a function. The details of how this call works can be explored in the `__call__` method in the `Vocabulary` class in **vocabulary.py**. ```pythondef __call__(self, word): if not word in self.word2idx: return self.word2idx[self.unk_word] return self.word2idx[word]```The `word2idx` instance variable is a Python [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) that is indexed by string-valued keys (mostly tokens obtained from training captions). For each key, the corresponding value is the integer that the token is mapped to in the pre-processing step.Use the code cell below to view a subset of this dictionary. | # Preview the word2idx dictionary.
dict(list(data_loader.dataset.vocab.word2idx.items())[:10]) | _____no_output_____ | MIT | 1_Preliminaries.ipynb | lanhhv84/Image-Captioning |
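A quick sketch of the `__call__` lookup in action - known tokens map to their integer ids, while anything unseen falls back to the `<unk>` id (which should be `2` with the default special tokens, but verify on your vocabulary):

```python
print(data_loader.dataset.vocab('riding'))  # a known token -> its integer id
print(data_loader.dataset.vocab('xyzzy'))   # an unseen token -> the <unk> id
```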
We also print the total number of keys. | # Print the total number of keys in the word2idx dictionary.
print('Total number of tokens in vocabulary:', len(data_loader.dataset.vocab)) | Total number of tokens in vocabulary: 8856
| MIT | 1_Preliminaries.ipynb | lanhhv84/Image-Captioning |
As you will see if you examine the code in **vocabulary.py**, the `word2idx` dictionary is created by looping over the captions in the training dataset. If a token appears no less than `vocab_threshold` times in the training set, then it is added as a key to the dictionary and assigned a corresponding unique integer. You will have the option later to amend the `vocab_threshold` argument when instantiating your data loader. Note that in general, **smaller** values for `vocab_threshold` yield a **larger** number of tokens in the vocabulary. You are encouraged to check this for yourself in the next code cell by decreasing the value of `vocab_threshold` before creating a new data loader. | # Modify the minimum word count threshold.
vocab_threshold = 4
# Obtain the data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
cocoapi_loc=cocoapi_loc,
vocab_from_file=False)
# Print the total number of keys in the word2idx dictionary.
print('Total number of tokens in vocabulary:', len(data_loader.dataset.vocab)) | Total number of tokens in vocabulary: 9955
| MIT | 1_Preliminaries.ipynb | lanhhv84/Image-Captioning |
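The thresholding rule itself is easy to illustrate with a toy counter (a sketch of the logic in **vocabulary.py**, not the actual implementation):

```python
from collections import Counter

counter = Counter({'the': 100, 'giraffe': 6, 'telephoto': 2})
for threshold in (1, 5):
    kept = [w for w, c in counter.items() if c >= threshold]
    print(threshold, '->', kept)  # smaller threshold -> larger vocabulary
```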