Dataset columns: markdown (string, 0 to 1.02M chars), code (string, 0 to 832k chars), output (string, 0 to 1.02M chars), license (string, 3 to 36 chars), path (string, 6 to 265 chars), repo_name (string, 6 to 127 chars).
There are also a few special keys in the `word2idx` dictionary. You are already familiar with the special start word (`"<start>"`) and special end word (`"<end>"`). There is one more special token, corresponding to unknown words (`"<unk>"`). All tokens that don't appear anywhere in the `word2idx` dictionary are considered unknown words. In the pre-processing step, any unknown tokens are mapped to the integer `2`.
unk_word = data_loader.dataset.vocab.unk_word
print('Special unknown word:', unk_word)
print('All unknown words are mapped to this integer:', data_loader.dataset.vocab(unk_word))
Special unknown word: <unk>
All unknown words are mapped to this integer: 2
MIT
1_Preliminaries.ipynb
lanhhv84/Image-Captioning
Check this for yourself below, by pre-processing the provided nonsense words that never appear in the training captions.
print(data_loader.dataset.vocab('jfkafejw'))
print(data_loader.dataset.vocab('ieowoqjf'))
2 2
MIT
1_Preliminaries.ipynb
lanhhv84/Image-Captioning
The final thing to mention is the `vocab_from_file` argument that is supplied when creating a data loader. To understand this argument, note that when you create a new data loader, the vocabulary (`data_loader.dataset.vocab`) is saved as a [pickle](https://docs.python.org/3/library/pickle.html) file in the project folder, with filename `vocab.pkl`. If you are still tweaking the value of the `vocab_threshold` argument, you **must** set `vocab_from_file=False` for your changes to take effect. But once you are happy with the value that you have chosen for the `vocab_threshold` argument, you need only run the data loader *one more time* with your chosen `vocab_threshold` to save the new vocabulary to file. Then, you can henceforth set `vocab_from_file=True` to load the vocabulary from file and speed up instantiation of the data loader. Note that building the vocabulary from scratch is the most time-consuming part of instantiating the data loader, so you are strongly encouraged to set `vocab_from_file=True` as soon as you are able. Also note that if `vocab_from_file=True`, then any `vocab_threshold` argument supplied when instantiating the data loader is completely ignored.
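The save/load behaviour described above is implemented inside **vocabulary.py**, which you should not modify. Purely as an illustration of what is happening, here is a minimal sketch of pickling a vocabulary object to `vocab.pkl` and reading it back; the helper names are hypothetical, not the project's actual functions.

```python
import pickle

# Hypothetical helpers illustrating the vocab.pkl behaviour described above;
# the real logic lives in vocabulary.py.
def save_vocab(vocab, vocab_file='vocab.pkl'):
    with open(vocab_file, 'wb') as f:
        pickle.dump(vocab, f)

def load_vocab(vocab_file='vocab.pkl'):
    with open(vocab_file, 'rb') as f:
        return pickle.load(f)
```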
# Obtain the data loader (from file). Note that it runs much faster than before!
data_loader = get_loader(transform=transform_train,
                         mode='train',
                         batch_size=batch_size,
                         cocoapi_loc=cocoapi_loc,
                         vocab_from_file=True)
Vocabulary successfully loaded from vocab.pkl file!
loading annotations into memory...
MIT
1_Preliminaries.ipynb
lanhhv84/Image-Captioning
In the next section, you will learn how to use the data loader to obtain batches of training data.

Step 2: Use the Data Loader to Obtain Batches

The captions in the dataset vary greatly in length. You can see this by examining `data_loader.dataset.caption_lengths`, a Python list with one entry for each training caption (where the value stores the length of the corresponding caption). In the code cell below, we use this list to print the total number of captions in the training data with each length. As you will see below, the majority of captions have length 10, while very short and very long captions are quite rare.
from collections import Counter

# Tally the total number of training captions with each length.
counter = Counter(data_loader.dataset.caption_lengths)
lengths = sorted(counter.items(), key=lambda pair: pair[1], reverse=True)
for value, count in lengths:
    print('value: %2d --- count: %5d' % (value, count))
value: 10 --- count: 86332 value: 11 --- count: 79945 value: 9 --- count: 71935 value: 12 --- count: 57639 value: 13 --- count: 37648 value: 14 --- count: 22335 value: 8 --- count: 20769 value: 15 --- count: 12842 value: 16 --- count: 7729 value: 17 --- count: 4842 value: 18 --- count: 3103 value: 19 --- count: 2015 value: 7 --- count: 1597 value: 20 --- count: 1451 value: 21 --- count: 999 value: 22 --- count: 683 value: 23 --- count: 534 value: 24 --- count: 383 value: 25 --- count: 277 value: 26 --- count: 215 value: 27 --- count: 159 value: 28 --- count: 115 value: 29 --- count: 86 value: 30 --- count: 58 value: 31 --- count: 49 value: 32 --- count: 44 value: 34 --- count: 39 value: 37 --- count: 32 value: 33 --- count: 31 value: 35 --- count: 31 value: 36 --- count: 26 value: 38 --- count: 18 value: 39 --- count: 18 value: 43 --- count: 16 value: 44 --- count: 16 value: 48 --- count: 12 value: 45 --- count: 11 value: 42 --- count: 10 value: 40 --- count: 9 value: 49 --- count: 9 value: 46 --- count: 9 value: 47 --- count: 7 value: 50 --- count: 6 value: 51 --- count: 6 value: 41 --- count: 6 value: 52 --- count: 5 value: 54 --- count: 3 value: 56 --- count: 2 value: 6 --- count: 2 value: 53 --- count: 2 value: 55 --- count: 2 value: 57 --- count: 1
MIT
1_Preliminaries.ipynb
lanhhv84/Image-Captioning
To generate batches of training data, we begin by first sampling a caption length (where the probability that any length is drawn is proportional to the number of captions with that length in the dataset). Then, we retrieve a batch of size `batch_size` of image-caption pairs, where all captions have the sampled length. This approach for assembling batches matches the procedure in [this paper](https://arxiv.org/pdf/1502.03044.pdf) and has been shown to be computationally efficient without degrading performance.

Run the code cell below to generate a batch. The `get_train_indices` method in the `CoCoDataset` class first samples a caption length, and then samples `batch_size` indices corresponding to training data points with captions of that length. These indices are stored below in `indices`. These indices are supplied to the data loader, which then is used to retrieve the corresponding data points. The pre-processed images and captions in the batch are stored in `images` and `captions`.
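The sampling just described is performed by `get_train_indices` in **data_loader.py**. Before running the provided cell, here is a minimal sketch of how such length-proportional sampling could be written; the function and variable names are hypothetical, not the project's actual code.

```python
import numpy as np

def sample_train_indices(caption_lengths, batch_size):
    """Hypothetical sketch of length-proportional batch index sampling."""
    caption_lengths = np.array(caption_lengths)
    # Picking one caption uniformly at random selects each length with
    # probability proportional to how many captions have that length.
    sampled_length = caption_lengths[np.random.randint(len(caption_lengths))]
    # Gather every caption of that length, then draw batch_size of them.
    candidates = np.where(caption_lengths == sampled_length)[0]
    return list(np.random.choice(candidates, size=batch_size))
```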
import numpy as np
import torch.utils.data as data

# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
print('sampled indices:', indices)

# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler

# Obtain the batch.
images, captions = next(iter(data_loader))

print('images.shape:', images.shape)
print('captions.shape:', captions.shape)

# (Optional) Uncomment the lines of code below to print the pre-processed images and captions.
# print('images:', images)
# print('captions:', captions)
sampled indices: [233186, 219334, 248528, 332607, 300925, 24377, 336380, 59426, 23758, 306722]
images.shape: torch.Size([10, 3, 224, 224])
captions.shape: torch.Size([10, 10])
MIT
1_Preliminaries.ipynb
lanhhv84/Image-Captioning
Each time you run the code cell above, a different caption length is sampled, and a different batch of training data is returned. Run the code cell multiple times to check this out!

You will train your model in the next notebook in this sequence (**2_Training.ipynb**). This code for generating training batches will be provided to you.

> Before moving to the next notebook in the sequence (**2_Training.ipynb**), you are strongly encouraged to take the time to become very familiar with the code in **data_loader.py** and **vocabulary.py**. **Step 1** and **Step 2** of this notebook are designed to help facilitate a basic introduction and guide your understanding. However, our description is not exhaustive, and it is up to you (as part of the project) to learn how to best utilize these files to complete the project. __You should NOT amend any of the code in either *data_loader.py* or *vocabulary.py*.__

In the next steps, we focus on learning how to specify a CNN-RNN architecture in PyTorch, towards the goal of image captioning.

Step 3: Experiment with the CNN Encoder

Run the code cell below to import `EncoderCNN` and `DecoderRNN` from **model.py**.
# Watch for any changes in model.py, and re-load it automatically.
%load_ext autoreload
%autoreload 2

# Import EncoderCNN and DecoderRNN.
from model import EncoderCNN, DecoderRNN
_____no_output_____
MIT
1_Preliminaries.ipynb
lanhhv84/Image-Captioning
In the next code cell we define a `device` that you will use to move PyTorch tensors to the GPU (if CUDA is available). Run this code cell before continuing.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
_____no_output_____
MIT
1_Preliminaries.ipynb
lanhhv84/Image-Captioning
Run the code cell below to instantiate the CNN encoder in `encoder`. The pre-processed images from the batch in **Step 2** of this notebook are then passed through the encoder, and the output is stored in `features`.
# Specify the dimensionality of the image embedding.
embed_size = 256

#-#-#-# Do NOT modify the code below this line. #-#-#-#

# Initialize the encoder. (Optional: Add additional arguments if necessary.)
encoder = EncoderCNN(embed_size)

# Move the encoder to GPU if CUDA is available.
encoder.to(device)

# Move last batch of images (from Step 2) to GPU if CUDA is available.
images = images.to(device)

# Pass the images through the encoder.
features = encoder(images)

print('type(features):', type(features))
print('features.shape:', features.shape)

# Check that your encoder satisfies some requirements of the project! :D
assert type(features)==torch.Tensor, "Encoder output needs to be a PyTorch Tensor."
assert (features.shape[0]==batch_size) & (features.shape[1]==embed_size), "The shape of the encoder output is incorrect."
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /home/hvlpr/.cache/torch/checkpoints/resnet50-19c8e357.pth 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 102502400/102502400 [00:09<00:00, 11202810.52it/s]
MIT
1_Preliminaries.ipynb
lanhhv84/Image-Captioning
The encoder that we provide to you uses the pre-trained ResNet-50 architecture (with the final fully-connected layer removed) to extract features from a batch of pre-processed images. The output is then flattened to a vector, before being passed through a `Linear` layer to transform the feature vector to have the same size as the word embedding.

![Encoder](images/encoder.png)

You are welcome (and encouraged) to amend the encoder in **model.py**, to experiment with other architectures. In particular, consider using a [different pre-trained model architecture](http://pytorch.org/docs/master/torchvision/models.html). You may also like to [add batch normalization](http://pytorch.org/docs/master/nn.html#normalization-layers).

> You are **not** required to change anything about the encoder.

For this project, you **must** incorporate a pre-trained CNN into your encoder. Your `EncoderCNN` class must take `embed_size` as an input argument, which will also correspond to the dimensionality of the input to the RNN decoder that you will implement in Step 4. When you train your model in the next notebook in this sequence (**2_Training.ipynb**), you are welcome to tweak the value of `embed_size`.

If you decide to modify the `EncoderCNN` class, save **model.py** and re-execute the code cell above. If the code cell returns an assertion error, then please follow the instructions to modify your code before proceeding. The assert statements ensure that `features` is a PyTorch tensor with shape `[batch_size, embed_size]`.

Step 4: Implement the RNN Decoder

Before executing the next code cell, you must write `__init__` and `forward` methods in the `DecoderRNN` class in **model.py**. (Do **not** write the `sample` method yet - you will work with this method when you reach **3_Inference.ipynb**.)

> The `__init__` and `forward` methods in the `DecoderRNN` class are the only things that you **need** to modify as part of this notebook. You will write more implementations in the notebooks that appear later in the sequence.

Your decoder will be an instance of the `DecoderRNN` class and must accept as input:
- the PyTorch tensor `features` containing the embedded image features (outputted in Step 3, when the last batch of images from Step 2 was passed through `encoder`), along with
- a PyTorch tensor corresponding to the last batch of captions (`captions`) from Step 2.

Note that the way we have written the data loader should simplify your code a bit. In particular, every training batch will contain pre-processed captions where all have the same length (`captions.shape[1]`), so **you do not need to worry about padding**.

> While you are encouraged to implement the decoder described in [this paper](https://arxiv.org/pdf/1411.4555.pdf), you are welcome to implement any architecture of your choosing, as long as it uses at least one RNN layer, with hidden dimension `hidden_size`. Although you will test the decoder using the last batch that is currently stored in the notebook, your decoder should be written to accept an arbitrary batch (of embedded image features and pre-processed captions [where all captions have the same length]) as input.

![Decoder](images/decoder.png)

In the code cell below, `outputs` should be a PyTorch tensor with size `[batch_size, captions.shape[1], vocab_size]`. Your output should be designed such that `outputs[i,j,k]` contains the model's predicted score, indicating how likely the `j`-th token in the `i`-th caption in the batch is the `k`-th token in the vocabulary.
In the next notebook of the sequence (**2_Training.ipynb**), we provide code to supply these scores to the [`torch.nn.CrossEntropyLoss`](http://pytorch.org/docs/master/nn.html#torch.nn.CrossEntropyLoss) loss function in PyTorch.
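The decoder is yours to implement in **model.py**. Purely as a minimal sketch of one architecture that satisfies the shape requirements above (an embedding layer feeding a single LSTM, followed by a linear layer over the vocabulary), consider the following; the class name, the single-layer LSTM, and the choice to drop the final caption token are illustrative assumptions, not the project's reference solution.

```python
import torch
import torch.nn as nn

class DecoderRNNSketch(nn.Module):
    """Illustrative sketch only, not the reference DecoderRNN."""
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # Embed all caption tokens except the last one.
        embeddings = self.embed(captions[:, :-1])
        # Prepend the image feature vector as the first input of the sequence.
        inputs = torch.cat((features.unsqueeze(1), embeddings), dim=1)
        hiddens, _ = self.lstm(inputs)
        # Scores over the vocabulary for every position in the sequence:
        # shape [batch_size, captions.shape[1], vocab_size].
        return self.fc(hiddens)
```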
from model import EncoderCNN, DecoderRNN

# Specify the number of features in the hidden state of the RNN decoder.
hidden_size = 512

#-#-#-# Do NOT modify the code below this line. #-#-#-#

# Store the size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)

# Initialize the decoder.
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)

# Move the decoder to GPU if CUDA is available.
decoder.to(device)

# Move last batch of captions (from Step 2) to GPU if CUDA is available.
captions = captions.to(device)

# Pass the encoder output and captions through the decoder.
outputs = decoder(features, captions)

print('type(outputs):', type(outputs))
print('outputs.shape:', outputs.shape)

# Check that your decoder satisfies some requirements of the project! :D
assert type(outputs)==torch.Tensor, "Decoder output needs to be a PyTorch Tensor."
assert (outputs.shape[0]==batch_size) & (outputs.shape[1]==captions.shape[1]) & (outputs.shape[2]==vocab_size), "The shape of the decoder output is incorrect."
torch.Size([10, 1, 256])
MIT
1_Preliminaries.ipynb
lanhhv84/Image-Captioning
Is the cell below what you wanted?
# The DataFrame `couple_columns` is assumed to have been loaded earlier in the notebook.
import matplotlib.pyplot as plt
import numpy as np

# Remove the mean from the Energy values (used as the scatter colour) and find their range.
low = (couple_columns[['Energy']] - couple_columns[['Energy']].mean()).min()[0]
high = (couple_columns[['Energy']] - couple_columns[['Energy']].mean()).max()[0]

plt.scatter(couple_columns[['helix1 phase']], couple_columns[['helix 2 phase']],
            c=(couple_columns[['Energy']] - couple_columns[['Energy']].mean()),
            edgecolors='none', vmin=low, vmax=high, cmap='Blues', marker='s', s=190)
plt.xlabel('helix1 phase')
plt.ylabel('helix 2 phase')

low, high
np.unique(couple_columns[['helix1 phase']].values)
(couple_columns[['Energy']] - couple_columns[['Energy']].mean()).head()
_____no_output_____
MIT
Request/.ipynb_checkpoints/sin-checkpoint.ipynb
mrpal39/Python_Tutorials
Example 2.1 Calculation of Volume of Gas Using Ideal Gas Behaviour
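For reference, the next cell is a direct application of the ideal gas law with the mole count written as mass over molecular weight; the symbols mirror the variables in the code (R = 10.73 psia*ft3/(lb-mol*degR), and the temperature is converted to degrees Rankine by adding 460 to degrees Fahrenheit):

$$ pV = nRT, \qquad n = \frac{m}{M_a} \quad\Rightarrow\quad V = \frac{m}{M_a}\,\frac{R\,T}{p} $$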
"Question is to calculate the volume of 3pounds of N-Butane gas using ideal gas behaviour" #known mass = 58.123 #lbs temp = 120 #Fdegrees pressure = 60 #psia m =3 #lbs R = 10.73 V= (m/mass)*(R*(temp+460)/pressure) #ft3 print("Volume of Gas using Ideal Gas Behaviour is:", V,"ft3")
Volume of Gas using Ideal Gas Behaviour is: 5.353646577086525 ft3
MIT
Chapter 2 Reservoir Fluid Properties/Examples/Chapter 2 Examples.ipynb
boomitsheth/Reservoir-Engineering-Handbook-
Example 2.2 Calculation of Density of N-Butane
"Question is to calculate the density of N-Butane" den = m/V print("Density of N-Butane is:", den,"lb/ft3")
Density of N-Butane is: 0.5603657164893786 lb/ft3
MIT
Chapter 2 Reservoir Fluid Properties/Examples/Chapter 2 Examples.ipynb
boomitsheth/Reservoir-Engineering-Handbook-
Example 2.3 Calculation of Density & Molecular Weight
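The next cell applies two standard relations, with the molecular weight of air taken as 28.96: the apparent molecular weight from the gas specific gravity, and the ideal-gas density:

$$ M_a = 28.96\,\gamma_g, \qquad \rho_g = \frac{p\,M_a}{R\,T} $$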
"Calcualte apparent molecular weight of gas and gas density, " spg= 0.65 Qg= 1.1 #Mmscf/d pressure= 1500 #psia temp= 150 #Fdegree Molweight= 28.96 * spg #lbs print("Molecular Weight of the gas is:", Molweight,"lbs") Gasden = (pressure * Molweight) / (10.73 * (temp+460)) #lb/ft3 print("Gas Density is:",Gasden,"lbs/ft3")
Molecular Weight of the gas is: 18.824 lbs Gas Density is: 4.3139351901364344 lbs/ft3
MIT
Chapter 2 Reservoir Fluid Properties/Examples/Chapter 2 Examples.ipynb
boomitsheth/Reservoir-Engineering-Handbook-
Example 2.4 Calculation of Specific Gravity and Molecular Weight
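The next cell computes the apparent molecular weight of the mixture as the mole-fraction-weighted sum of the component molecular weights, and then the specific gravity relative to air:

$$ M_a = \sum_i y_i\,M_i, \qquad \gamma_g = \frac{M_a}{28.96} $$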
import pandas as pd

component = pd.read_csv(r"F:\Tarek Ahmed Reservoir Engineering Data\Chapter 2 Reservoir Fluid Properties\example2.4.csv")
component

Mi = [44.01, 16.04, 30.07, 44.11]   # component molecular weights
component['Mi'] = Mi
component

component['yiMi'] = component['yi'] * component['Mi']
component

Ma = component.sum(axis=0)
Ma

# Hence the apparent molecular weight is 18.042.
Ma = 18.042
specificgravity = Ma/28.96
specificgravity
_____no_output_____
MIT
Chapter 2 Reservoir Fluid Properties/Examples/Chapter 2 Examples.ipynb
boomitsheth/Reservoir-Engineering-Handbook-
Example 2.5 Calculation of Gas Compressibility Factor
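The next cell builds the pseudo-critical temperature and pressure of the mixture via Kay's mixing rule (the sums of the `yiTci` and `yiPci` columns), which feed the pseudo-reduced properties used to read off the compressibility factor:

$$ T_{pc} = \sum_i y_i\,T_{ci}, \qquad p_{pc} = \sum_i y_i\,p_{ci} $$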
components = pd.read_csv(r"F:\Tarek Ahmed Reservoir Engineering Data\Chapter 2 Reservoir Fluid Properties\example2.5.csv")
components

pressure = 3000   # psia
temp = 180        # degrees F

Tci = [547.91, 227.49, 343.33, 549.92, 666.06, 734.46, 765.62]   # critical temperatures, degR
components['Tci'] = Tci
components

components['yiTci'] = components['yi'] * components['Tci']
components

Pci = [1071, 493.1, 666.4, 706.5, 616.4, 527.9, 550.6]   # critical pressures, psia
components['Pci'] = Pci
components

components['yiPci'] = components['yi'] * components['Pci']
components

components.sum(axis=0)
_____no_output_____
MIT
Chapter 2 Reservoir Fluid Properties/Examples/Chapter 2 Examples.ipynb
boomitsheth/Reservoir-Engineering-Handbook-
Example 2.17 Calculate Specific Gravity of Separated Gas
"Separator tests were conducted on a crude oil sample. Results of the test in terms of GOR and Gas Specific Gravity are calculated. " results = pd.read_csv("F:\Tarek Ahmed Reservoir Engineering Data\Chapter 2 Reservoir Fluid Properties\example2.17.csv") results results['GORGasSg']= results['GOR'] * results['GasSg'] results results.sum() Sg = 806.212 / 984 Sg
_____no_output_____
MIT
Chapter 2 Reservoir Fluid Properties/Examples/Chapter 2 Examples.ipynb
boomitsheth/Reservoir-Engineering-Handbook-
Example 2.18 Using Standing Correlation, Estimate Gas Solubility at Pb
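The next cell implements Standing's correlation for the solution gas-oil ratio at the bubble-point pressure, exactly as coded in the `x`, `10^x`, and `Predicted Rs` columns (API gravity in degrees API, T in degrees F, Pb in psia):

$$ x = 0.0125\,\mathrm{API} - 0.00091\,T, \qquad R_s = \gamma_g\left[\left(\frac{p_b}{18.2} + 1.4\right)10^{x}\right]^{1.2048} $$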
table = pd.read_csv(r"F:\Tarek Ahmed Reservoir Engineering Data\Chapter 2 Reservoir Fluid Properties\example2.18.csv")
table

# T    = reservoir temperature
# Pb   = bubble point pressure
# Bo   = oil formation volume factor
# Psep = separator pressure
# Tsep = separator temperature
# Co   = isothermal compressibility coefficient of the oil

table['x'] = (0.0125*table['API'].astype(float)) - 0.00091*table['T'].astype(float)
table

table['10^x'] = pow(10, table['x'])
table

# Standing correlation used to calculate the predicted gas solubility.
table['Predicted Rs'] = table['SG'] * pow((((table['Pb'].astype(float)/18.2) + 1.4) * table['10^x']), 1.2048)
table

# Calculate the absolute average error in prediction of solubility at bubble point pressure.
table['%error'] = (table['Predicted Rs'] - table['Rs']) * 100 / table['Predicted Rs']
table
_____no_output_____
MIT
Chapter 2 Reservoir Fluid Properties/Examples/Chapter 2 Examples.ipynb
boomitsheth/Reservoir-Engineering-Handbook-
Mining Twitter

Twitter implements OAuth 1.0A as its standard authentication mechanism, and in order to use it to make requests to Twitter's API, you'll need to go to https://developer.twitter.com/en/apps and create a sample application. There are four primary identifiers you'll need to note for an OAuth 1.0A workflow: consumer key, consumer secret, access token, and access token secret. Note that you will need an ordinary Twitter account in order to log in, create an app, and get these credentials.

If you are taking advantage of the virtual machine experience for this chapter that is powered by Vagrant, you should just be able to execute the code in this notebook without any worries whatsoever about installing dependencies. If you are running the code from your own development environment, however, be advised that the examples in this chapter take advantage of a Python package called [twitter](https://github.com/sixohsix/twitter) to make API calls. You can install this package in a terminal with [pip](https://pypi.python.org/pypi/pip) with the command `pip install twitter`, preferably from within a [Python virtual environment](https://pypi.python.org/pypi/virtualenv). Once installed, you should be able to open up a Python interpreter (or better yet, your [IPython](http://ipython.org/) interpreter) and get rolling.

Authorizing an application to access Twitter account data
import twitter

# Go to https://developer.twitter.com/en/apps to create an app and get values
# for these credentials, which you'll need to provide in place of these
# empty string values that are defined as placeholders.
# See https://developer.twitter.com/en/docs/basics/authentication/overview/oauth
# for more information on Twitter's OAuth implementation.

CONSUMER_KEY = ''
CONSUMER_SECRET = ''
OAUTH_TOKEN = ''
OAUTH_TOKEN_SECRET = ''

auth = twitter.oauth.OAuth(OAUTH_TOKEN, OAUTH_TOKEN_SECRET,
                           CONSUMER_KEY, CONSUMER_SECRET)

twitter_api = twitter.Twitter(auth=auth)

# Nothing to see by displaying twitter_api except that it's now a
# defined variable
print(twitter_api)
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Retrieving trends
# The Yahoo! Where On Earth ID for the entire world is 1.
# See https://dev.twitter.com/docs/api/1.1/get/trends/place and
# http://developer.yahoo.com/geo/geoplanet/

WORLD_WOE_ID = 1
US_WOE_ID = 23424977

# Prefix ID with the underscore for query string parameterization.
# Without the underscore, the twitter package appends the ID value
# to the URL itself as a special case keyword argument.

world_trends = twitter_api.trends.place(_id=WORLD_WOE_ID)
us_trends = twitter_api.trends.place(_id=US_WOE_ID)

print(world_trends)
print()
print(us_trends)

for trend in world_trends[0]['trends']:
    print(trend['name'])

for trend in us_trends[0]['trends']:
    print(trend['name'])

world_trends_set = set([trend['name'] for trend in world_trends[0]['trends']])
us_trends_set = set([trend['name'] for trend in us_trends[0]['trends']])

common_trends = world_trends_set.intersection(us_trends_set)

print(common_trends)
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Anatomy of a Tweet
import json

# Set this variable to a trending topic,
# or anything else for that matter. The example query below
# was a trending topic when this content was being developed
# and is used throughout the remainder of this chapter.
q = '#MothersDay'

count = 100

# Import unquote to prevent url encoding errors in next_results
from urllib.parse import unquote

# See https://dev.twitter.com/rest/reference/get/search/tweets
search_results = twitter_api.search.tweets(q=q, count=count)

statuses = search_results['statuses']

# Iterate through 5 more batches of results by following the cursor
for _ in range(5):
    print('Length of statuses', len(statuses))
    try:
        next_results = search_results['search_metadata']['next_results']
    except KeyError as e:   # No more results when next_results doesn't exist
        break

    # Create a dictionary from next_results, which has the following form:
    # ?max_id=847960489447628799&q=%23RIPSelena&count=100&include_entities=1
    kwargs = dict([ kv.split('=') for kv in unquote(next_results[1:]).split("&") ])

    search_results = twitter_api.search.tweets(**kwargs)
    statuses += search_results['statuses']

# Show one sample search result by slicing the list...
print(json.dumps(statuses[0], indent=1))

for i in range(10):
    print()
    print(statuses[i]['text'])
    print('Favorites: ', statuses[i]['favorite_count'])
    print('Retweets: ', statuses[i]['retweet_count'])
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Extracting text, screen names, and hashtags from tweets
status_texts = [ status['text']
                 for status in statuses ]

screen_names = [ user_mention['screen_name']
                 for status in statuses
                     for user_mention in status['entities']['user_mentions'] ]

hashtags = [ hashtag['text']
             for status in statuses
                 for hashtag in status['entities']['hashtags'] ]

# Compute a collection of all words from all tweets
words = [ w
          for t in status_texts
              for w in t.split() ]

# Explore the first 5 items for each...
print(json.dumps(status_texts[0:5], indent=1))
print(json.dumps(screen_names[0:5], indent=1))
print(json.dumps(hashtags[0:5], indent=1))
print(json.dumps(words[0:5], indent=1))
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Creating a basic frequency distribution from the words in tweets
from collections import Counter

for item in [words, screen_names, hashtags]:
    c = Counter(item)
    print(c.most_common()[:10])   # top 10
    print()
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Using prettytable to display tuples in a nice tabular format
from prettytable import PrettyTable

for label, data in (('Word', words),
                    ('Screen Name', screen_names),
                    ('Hashtag', hashtags)):
    pt = PrettyTable(field_names=[label, 'Count'])
    c = Counter(data)
    [ pt.add_row(kv) for kv in c.most_common()[:10] ]
    pt.align[label], pt.align['Count'] = 'l', 'r'   # Set column alignment
    print(pt)
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Calculating lexical diversity for tweets
# A function for computing lexical diversity
def lexical_diversity(tokens):
    return len(set(tokens))/len(tokens)

# A function for computing the average number of words per tweet
def average_words(statuses):
    total_words = sum([ len(s.split()) for s in statuses ])
    return total_words/len(statuses)

print(lexical_diversity(words))
print(lexical_diversity(screen_names))
print(lexical_diversity(hashtags))
print(average_words(status_texts))
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Finding the most popular retweets
retweets = [
            # Store out a tuple of these three values ...
            (status['retweet_count'],
             status['retweeted_status']['user']['screen_name'],
             status['retweeted_status']['id'],
             status['text'])

            # ... for each status ...
            for status in statuses

            # ... so long as the status meets this condition.
                if 'retweeted_status' in status.keys()
           ]

# Slice off the first 5 from the sorted results and display each item in the tuple
pt = PrettyTable(field_names=['Count', 'Screen Name', 'Tweet ID', 'Text'])
[ pt.add_row(row) for row in sorted(retweets, reverse=True)[:5] ]
pt.max_width['Text'] = 50
pt.align = 'l'
print(pt)
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Looking up users who have retweeted a status
# Get the original tweet id for a tweet from its retweeted_status node
# and insert it here
_retweets = twitter_api.statuses.retweets(id=862359093398261760)
print([r['user']['screen_name'] for r in _retweets])
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Plotting frequencies of words
import matplotlib.pyplot as plt
%matplotlib inline

word_counts = sorted(Counter(words).values(), reverse=True)

plt.loglog(word_counts)
plt.ylabel("Freq")
plt.xlabel("Word Rank")
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Generating histograms of words, screen names, and hashtags
for label, data in (('Words', words),
                    ('Screen Names', screen_names),
                    ('Hashtags', hashtags)):
    # Build a frequency map for each set of data
    # and plot the values
    c = Counter(data)
    plt.hist(list(c.values()))

    # Add a title and y-label ...
    plt.title(label)
    plt.ylabel("Number of items in bin")
    plt.xlabel("Bins (number of times an item appeared)")

    # ... and display as a new figure
    plt.figure()
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Generating a histogram of retweet counts
# Using underscores while unpacking values in
# a tuple is idiomatic for discarding them
counts = [count for count, _, _, _ in retweets]

plt.hist(counts)
plt.title('Retweets')
plt.xlabel('Bins (number of times retweeted)')
plt.ylabel('Number of tweets in bin')
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Sentiment Analysis
# pip install nltk
import nltk
nltk.download('vader_lexicon')

import numpy as np
from nltk.sentiment.vader import SentimentIntensityAnalyzer

twitter_stream = twitter.TwitterStream(auth=auth)
iterator = twitter_stream.statuses.sample()

tweets = []
for tweet in iterator:
    try:
        if tweet['lang'] == 'en':
            tweets.append(tweet)
    except:
        pass
    if len(tweets) == 100:
        break

analyzer = SentimentIntensityAnalyzer()

analyzer.polarity_scores('Hello')
analyzer.polarity_scores('I really enjoy this video series.')
analyzer.polarity_scores('I REALLY enjoy this video series.')
analyzer.polarity_scores('I REALLY enjoy this video series!!!')
analyzer.polarity_scores('I REALLY did not enjoy this video series!!!')

scores = np.zeros(len(tweets))

for i, t in enumerate(tweets):
    # Extract the text portion of the tweet
    text = t['text']

    # Measure the polarity of the tweet
    polarity = analyzer.polarity_scores(text)

    # Store the normalized, weighted composite score
    scores[i] = polarity['compound']

most_positive = np.argmax(scores)
most_negative = np.argmin(scores)

print('{0:6.3f} : "{1}"'.format(scores[most_positive], tweets[most_positive]['text']))
print('{0:6.3f} : "{1}"'.format(scores[most_negative], tweets[most_negative]['text']))
_____no_output_____
BSD-2-Clause
notebooks/Chapter 1 - Mining Twitter.ipynb
KaranamVijayKumar/Mining-the-Social-Web-3rd-Edition
Text models, data, and training
from fastai.gen_doc.nbdoc import *
_____no_output_____
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
The [`text`](/text.html#text) module of the fastai library contains all the necessary functions to define a Dataset suitable for the various NLP (Natural Language Processing) tasks and quickly generate models you can use for them. Specifically:
- [`text.transform`](/text.transform.html#text.transform) contains all the scripts to preprocess your data, from raw text to token ids,
- [`text.data`](/text.data.html#text.data) contains the definition of [`TextDataset`](/text.data.html#TextDataset), which is the main class you'll need in NLP,
- [`text.learner`](/text.learner.html#text.learner) contains helper functions to quickly create a language model or an RNN classifier.

Have a look at the links above for full details of the API of each module, or read on for a quick overview.

Quick Start: Training an IMDb sentiment model with *ULMFiT*

Let's start with a quick end-to-end example of training a model. We'll train a sentiment classifier on a sample of the popular IMDb data, showing 4 steps:
1. Reading and viewing the IMDb data
1. Getting your data ready for modeling
1. Fine-tuning a language model
1. Building a classifier

Reading and viewing the IMDb data

First let's import everything we need for text.
from fastai import *
from fastai.text import *
_____no_output_____
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
Contrary to images in Computer Vision, text can't directly be transformed into numbers to be fed into a model. The first thing we need to do is to preprocess our data so that we change the raw texts to lists of words, or tokens (a step that is called tokenization), then transform these tokens into numbers (a step that is called numericalization). These numbers are then passed to embedding layers that will convert them into arrays of floats before passing them through a model.

You can find plenty of [Word Embeddings](https://en.wikipedia.org/wiki/Word_embedding) on the web to directly convert your tokens into floats. Those word embeddings have generally been trained on a large corpus such as Wikipedia. Following the work of [ULMFiT](https://arxiv.org/abs/1801.06146), the fastai library is more focused on using pre-trained Language Models and fine-tuning them. Word embeddings are just vectors of 300 or 400 floats that represent different words, but a pretrained language model not only has those, but has also been trained to get a representation of full sentences and documents.

That's why the library is structured around three steps:
1. Get your data preprocessed and ready to use in a minimum amount of code,
1. Create a language model with pretrained weights that you can fine-tune to your dataset,
1. Create other models such as classifiers on top of the encoder of the language model.

To show examples, we have provided a small sample of the [IMDB dataset](https://www.imdb.com/interfaces/) which contains 1,000 reviews of movies with labels (positive or negative). A toy illustration of tokenization and numericalization is sketched below.
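Purely as a toy illustration of the two steps described above (and not fastai's actual tokenizer, which also inserts special tokens), here is what tokenization followed by numericalization looks like on two tiny strings:

```python
# Toy illustration only; fastai's real pipeline is more sophisticated.
texts = ["This movie was great!", "This movie was terrible."]

# Tokenization: split raw text into tokens (here, a naive whitespace split).
tokenized = [t.lower().replace('!', ' !').replace('.', ' .').split() for t in texts]

# Numericalization: map each token to an integer id via a vocabulary.
vocab = sorted({tok for toks in tokenized for tok in toks})
stoi = {tok: i for i, tok in enumerate(vocab)}
ids = [[stoi[tok] for tok in toks] for toks in tokenized]

print(tokenized)
print(ids)
```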
path = untar_data(URLs.IMDB_SAMPLE) path
_____no_output_____
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
Creating a dataset from your raw texts is very simple if you have it organized in one of these ways:
- organized in folders in an ImageNet style
- organized in a csv file with a label column and a text column

Here, the sample from IMDb is in a csv file of texts that looks like this:
df = pd.read_csv(path/'texts.csv')
df.head()
_____no_output_____
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
Getting your data ready for modeling
for file in ['train_tok.npy', 'valid_tok.npy']:
    if os.path.exists(path/'tmp'/file):
        os.remove(path/'tmp'/file)
_____no_output_____
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
To get a [`DataBunch`](/basic_data.html#DataBunch) quickly, there are also several factory methods depending on how our data is structured. They are all detailed in [`text.data`](/text.data.html#text.data); here we'll use the `from_csv` method of the [`TextLMDataBunch`](/text.data.html#TextLMDataBunch) (to get the data ready for a language model) and [`TextClasDataBunch`](/text.data.html#TextClasDataBunch) (to get the data ready for a text classifier) classes.
# Language model data
data_lm = TextLMDataBunch.from_csv(path, 'texts.csv')
# Classifier model data
data_clas = TextClasDataBunch.from_csv(path, 'texts.csv', vocab=data_lm.train_ds.vocab, bs=32)
_____no_output_____
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
This does all the necessary preprocessing behind the scenes. For the classifier, we also pass the vocabulary (mapping from ids to words) that we want to use: this is to ensure that `data_clas` will use the same dictionary as `data_lm`. Since this step can be a bit time-consuming, it's best to save the result with:
data_lm.save()
data_clas.save()
_____no_output_____
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
This will create a 'tmp' directory where all the computed stuff will be stored. You can then reload those results with:
data_lm = TextLMDataBunch.load(path)
data_clas = TextClasDataBunch.load(path, bs=32)
_____no_output_____
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
Note that you can load the data with different [`DataBunch`](/basic_data.html#DataBunch) parameters (batch size, `bptt`, ...).

Fine-tuning a language model

We can use the `data_lm` object we created earlier to fine-tune a pretrained language model. [fast.ai](http://www.fast.ai/) has an English model available that we can download. We can create a learner object that will directly create a model, download the pretrained weights and be ready for fine-tuning.
learn = language_model_learner(data_lm, pretrained_model=URLs.WT103, drop_mult=0.5)
learn.fit_one_cycle(1, 1e-2)
Total time: 00:04
epoch  train_loss  valid_loss  accuracy
1      4.720898    4.212008    0.248862  (00:04)
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
Like a computer vision model, we can then unfreeze the model and fine-tune it.
learn.unfreeze()
learn.fit_one_cycle(1, 1e-3)
Total time: 00:22
epoch  train_loss  valid_loss  accuracy
1      4.450525    4.127853    0.253167  (00:22)
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
To evaluate your language model, you can run the [`Learner.predict`](/basic_train.html#Learner.predict) method and specify the number of words you want it to guess.
learn.predict("This is a review about", n_words=10)
Total time: 00:00
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
It doesn't make much sense (we have a tiny vocabulary here and didn't train much on it) but note that it respects basic grammar (which comes from the pretrained model). Finally, we save the encoder to be able to use it for classification in the next section.
learn.save_encoder('ft_enc')
_____no_output_____
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
Building a classifier

We now use the `data_clas` object we created earlier to build a classifier with our fine-tuned encoder. The learner object can be created in a single line.
learn = text_classifier_learner(data_clas, drop_mult=0.5)
learn.load_encoder('ft_enc')
learn.fit_one_cycle(1, 1e-2)
Total time: 00:26
epoch  train_loss  valid_loss  accuracy
1      0.686503    0.632651    0.701493  (00:26)
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
Again, we can unfreeze the model and fine-tune it.
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(5e-3/2., 5e-3))
learn.unfreeze()
learn.fit_one_cycle(1, slice(2e-3/100, 2e-3))
Total time: 00:55
epoch  train_loss  valid_loss  accuracy
1      0.510760    0.479997    0.791045  (00:55)
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
Again, we can predict on a raw text by using the [`Learner.predict`](/basic_train.html#Learner.predict) method.
learn.predict("This was a great movie!")
_____no_output_____
Apache-2.0
docs_src/text.ipynb
holmesal/fastai
Project 5: NLP on Financial Statements

Instructions

Each problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `# TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity.

Packages

When you implement the functions, you'll only need to use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code.

The other packages that we're importing are `project_helper` and `project_tests`. These are custom packages built to help you solve the problems. The `project_helper` module contains utility functions and graph functions. The `project_tests` module contains the unit tests for all the problems.

Install Packages
import sys
!{sys.executable} -m pip install -r requirements.txt
Collecting alphalens==0.3.2 (from -r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/a5/dc/2f9cd107d0d4cf6223d37d81ddfbbdbf0d703d03669b83810fa6b97f32e5/alphalens-0.3.2.tar.gz (18.9MB)
Collecting nltk==3.3.0 (from -r requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/50/09/3b1755d528ad9156ee7243d52aa5cd2b809ef053a0f31b53d92853dd653a/nltk-3.3.0.zip (1.4MB)
Collecting numpy==1.13.3 (from -r requirements.txt (line 3))
  Downloading https://files.pythonhosted.org/packages/57/a7/e3e6bd9d595125e1abbe162e323fd2d06f6f6683185294b79cd2cdb190d5/numpy-1.13.3-cp36-cp36m-manylinux1_x86_64.whl (17.0MB)
Collecting ratelimit==2.2.0 (from -r requirements.txt (line 4))
  Downloading https://files.pythonhosted.org/packages/b5/73/956d739706da2f74891ba46391381ce7e680dce27cce90df7c706512d5bf/ratelimit-2.2.0.tar.gz
Requirement already satisfied: requests==2.18.4 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 5)) (2.18.4)
Requirement already satisfied: scikit-learn==0.19.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 6)) (0.19.1)
Requirement already satisfied: six==1.11.0 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 7)) (1.11.0)
Collecting tqdm==4.19.5 (from -r requirements.txt (line 8))
  Downloading https://files.pythonhosted.org/packages/71/3c/341b4fa23cb3abc335207dba057c790f3bb329f6757e1fcd5d347bcf8308/tqdm-4.19.5-py2.py3-none-any.whl (51kB)
Requirement already satisfied: matplotlib>=1.4.0, pandas>=0.18.0, scipy>=0.14.0, seaborn>=0.6.0, statsmodels>=0.6.1 and IPython>=3.2.3 (plus their dependencies: chardet, idna, urllib3, certifi, python-dateutil, pytz, cycler, pyparsing, backcall, decorator, pexpect, prompt-toolkit, simplegeneric, traitlets, setuptools, pygments, pickleshare, jedi, ptyprocess, wcwidth, ipython-genutils) in /opt/conda/lib/python3.6/site-packages
Building wheels for collected packages: alphalens, nltk, ratelimit
  Running setup.py bdist_wheel for alphalens ... done
  Stored in directory: /root/.cache/pip/wheels/77/1e/9a/223b4c94d7f564f25d94b48ca5b9c53e3034016ece3fd8c8c1
  Running setup.py bdist_wheel for nltk ... done
  Stored in directory: /root/.cache/pip/wheels/d1/ab/40/3bceea46922767e42986aef7606a600538ca80de6062dc266c
  Running setup.py bdist_wheel for ratelimit ... done
  Stored in directory: /root/.cache/pip/wheels/a6/2a/13/3c6e42757ca0b6873a60e0697d30f7dd9d521a52874c44f201
Successfully built alphalens nltk ratelimit
tensorflow 1.3.0 requires tensorflow-tensorboard<0.2.0,>=0.1.0, which is not installed.
moviepy 0.2.3.2 has requirement tqdm==4.11.2, but you'll have tqdm 4.19.5 which is incompatible.
Installing collected packages: numpy, alphalens, nltk, ratelimit, tqdm
  Found existing installation: numpy 1.12.1
    Uninstalling numpy-1.12.1:
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Load Packages
import nltk
import numpy as np
import pandas as pd
import pickle
import pprint
import project_helper
import project_tests

from tqdm import tqdm
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Download NLP Corpora

You'll need two corpora to run this project: the stopwords corpus for removing stopwords and wordnet for lemmatizing.
nltk.download('stopwords')
nltk.download('wordnet')
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data]   Unzipping corpora/stopwords.zip.
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data]   Unzipping corpora/wordnet.zip.
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Get 10-Ks

We'll be running NLP analysis on 10-K documents. To do that, we first need to download the documents. For this project, we'll download 10-Ks for a few companies. To look up documents for these companies, we'll use their CIK. If you would like to run this against other stocks, we've provided the dict `additional_cik` for more stocks. However, the more stocks you try, the longer it will take to run.
cik_lookup = {
    'AMZN': '0001018724', 'BMY': '0000014272', 'CNP': '0001130310', 'CVX': '0000093410',
    'FL': '0000850209', 'FRT': '0000034903', 'HON': '0000773840'}

additional_cik = {
    'AEP': '0000004904', 'AXP': '0000004962', 'BA': '0000012927', 'BK': '0001390777',
    'CAT': '0000018230', 'DE': '0000315189', 'DIS': '0001001039', 'DTE': '0000936340',
    'ED': '0001047862', 'EMR': '0000032604', 'ETN': '0001551182', 'GE': '0000040545',
    'IBM': '0000051143', 'IP': '0000051434', 'JNJ': '0000200406', 'KO': '0000021344',
    'LLY': '0000059478', 'MCD': '0000063908', 'MO': '0000764180', 'MRK': '0000310158',
    'MRO': '0000101778', 'PCG': '0001004980', 'PEP': '0000077476', 'PFE': '0000078003',
    'PG': '0000080424', 'PNR': '0000077360', 'SYY': '0000096021', 'TXN': '0000097476',
    'UTX': '0000101829', 'WFC': '0000072971', 'WMT': '0000104169', 'WY': '0000106535',
    'XOM': '0000034088'}
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Get List of 10-Ks

The SEC has a limit on the number of calls you can make to the website per second. In order to avoid hitting that limit, we've created the `SecAPI` class. This will cache data from the SEC and prevent you from going over the limit.
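`SecAPI` is provided by `project_helper`, so its internals are not shown here. Purely as an illustration of the idea described above, here is a minimal sketch of a caching, rate-limited fetcher; the class name, the one-request-per-second budget, and the use of `requests` are assumptions, not the project's actual implementation.

```python
import time
import requests

class RateLimitedCachingFetcher:
    """Illustrative sketch only: cache responses and space out requests."""
    def __init__(self, min_interval=1.0):
        self._cache = {}              # url -> response text
        self._min_interval = min_interval
        self._last_call = 0.0

    def get(self, url):
        if url in self._cache:        # served from cache, no HTTP call
            return self._cache[url]
        # Sleep long enough to respect the requests-per-second budget.
        wait = self._min_interval - (time.monotonic() - self._last_call)
        if wait > 0:
            time.sleep(wait)
        self._last_call = time.monotonic()
        text = requests.get(url).text
        self._cache[url] = text
        return text
```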
sec_api = project_helper.SecAPI()
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
With the class constructed, let's pull a list of filed 10-Ks from the SEC for each company.
from bs4 import BeautifulSoup

def get_sec_data(cik, doc_type, start=0, count=60):
    newest_pricing_data = pd.to_datetime('2018-01-01')
    rss_url = 'https://www.sec.gov/cgi-bin/browse-edgar?action=getcompany' \
        '&CIK={}&type={}&start={}&count={}&owner=exclude&output=atom' \
        .format(cik, doc_type, start, count)
    sec_data = sec_api.get(rss_url)
    feed = BeautifulSoup(sec_data.encode('ascii'), 'xml').feed
    entries = [
        (
            entry.content.find('filing-href').getText(),
            entry.content.find('filing-type').getText(),
            entry.content.find('filing-date').getText())
        for entry in feed.find_all('entry', recursive=False)
        if pd.to_datetime(entry.content.find('filing-date').getText()) <= newest_pricing_data]

    return entries
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Let's pull the list using the `get_sec_data` function, then display some of the results. For displaying some of the data, we'll use Amazon as an example.
example_ticker = 'AMZN'
sec_data = {}

for ticker, cik in cik_lookup.items():
    sec_data[ticker] = get_sec_data(cik, '10-K')

pprint.pprint(sec_data[example_ticker][:5])
[('https://www.sec.gov/Archives/edgar/data/1018724/000101872417000011/0001018724-17-000011-index.htm', '10-K', '2017-02-10'), ('https://www.sec.gov/Archives/edgar/data/1018724/000101872416000172/0001018724-16-000172-index.htm', '10-K', '2016-01-29'), ('https://www.sec.gov/Archives/edgar/data/1018724/000101872415000006/0001018724-15-000006-index.htm', '10-K', '2015-01-30'), ('https://www.sec.gov/Archives/edgar/data/1018724/000101872414000006/0001018724-14-000006-index.htm', '10-K', '2014-01-31'), ('https://www.sec.gov/Archives/edgar/data/1018724/000119312513028520/0001193125-13-028520-index.htm', '10-K', '2013-01-30')]
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Download 10-Ks

As you can see, this is a list of urls. These urls point to a file that contains metadata related to each filing. Since we don't care about the metadata, we'll pull the filing itself by replacing the index url with the filing url.
raw_fillings_by_ticker = {}

for ticker, data in sec_data.items():
    raw_fillings_by_ticker[ticker] = {}
    for index_url, file_type, file_date in tqdm(data, desc='Downloading {} Fillings'.format(ticker), unit='filling'):
        if (file_type == '10-K'):
            file_url = index_url.replace('-index.htm', '.txt').replace('.txtl', '.txt')

            raw_fillings_by_ticker[ticker][file_date] = sec_api.get(file_url)

print('Example Document:\n\n{}...'.format(next(iter(raw_fillings_by_ticker[example_ticker].values()))[:1000]))
Downloading AMZN Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 22/22 [00:03<00:00, 6.59filling/s] Downloading BMY Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 27/27 [00:05<00:00, 4.56filling/s] Downloading CNP Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 19/19 [00:03<00:00, 5.34filling/s] Downloading CVX Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 25/25 [00:05<00:00, 4.65filling/s] Downloading FL Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 22/22 [00:03<00:00, 5.65filling/s] Downloading FRT Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 29/29 [00:03<00:00, 7.95filling/s] Downloading HON Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 25/25 [00:04<00:00, 5.04filling/s]
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Get Documents

With these filings downloaded, we want to break them into their associated documents. These documents are sectioned off in the filings with the tags `<DOCUMENT>` for the start of each document and `</DOCUMENT>` for the end of each document. There's no overlap with these documents, so each `</DOCUMENT>` tag should come after the `<DOCUMENT>` tag with no `<DOCUMENT>` tag in between.

Implement `get_documents` to return a list of these documents from a filing. Make sure not to include the tags in the returned document text.
import re

def get_documents(text):
    """
    Extract the documents from the text

    Parameters
    ----------
    text : str
        The text with the document strings inside

    Returns
    -------
    extracted_docs : list of str
        The document strings found in `text`
    """
    # TODO: Implement
    start_doc = re.compile(r'<DOCUMENT>')
    end_doc = re.compile(r'</DOCUMENT>')

    start_idx = [x.end() for x in re.finditer(start_doc, text)]
    end_idx = [x.start() for x in re.finditer(end_doc, text)]

    extracted_docs = []
    for doc_start, doc_end in zip(start_idx, end_idx):
        extracted_docs.append(text[doc_start:doc_end])

    return extracted_docs

project_tests.test_get_documents(get_documents)
Tests Passed
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
With the `get_documents` function implemented, let's extract all the documents.
filling_documents_by_ticker = {}

for ticker, raw_fillings in raw_fillings_by_ticker.items():
    filling_documents_by_ticker[ticker] = {}
    for file_date, filling in tqdm(raw_fillings.items(), desc='Getting Documents from {} Fillings'.format(ticker), unit='filling'):
        filling_documents_by_ticker[ticker][file_date] = get_documents(filling)

print('\n\n'.join([
    'Document {} Filed on {}:\n{}...'.format(doc_i, file_date, doc[:200])
    for file_date, docs in filling_documents_by_ticker[example_ticker].items()
    for doc_i, doc in enumerate(docs)][:3]))
Getting Documents from AMZN Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 17/17 [00:00<00:00, 41.88filling/s] Getting Documents from BMY Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 23/23 [00:01<00:00, 20.93filling/s] Getting Documents from CNP Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 15/15 [00:00<00:00, 21.72filling/s] Getting Documents from CVX Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:00<00:00, 23.42filling/s] Getting Documents from FL Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 16/16 [00:00<00:00, 27.37filling/s] Getting Documents from FRT Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 19/19 [00:00<00:00, 34.29filling/s] Getting Documents from HON Fillings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:00<00:00, 28.93filling/s]
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Get Document TypesNow that we have all the documents, we want to find the 10-K form in each 10-K filing. Implement the `get_document_type` function to return the type of the document given. The document type is located on a line with the `<TYPE>` tag. For example, a form of type "TEST" would have the line `<TYPE>TEST`. Make sure to return the type as lowercase, so this example would be returned as "test".
def get_document_type(doc): """ Return the document type lowercased Parameters ---------- doc : str The document string Returns ------- doc_type : str The document type lowercased """ # TODO: Implement # (?<= positive lookbehind. matches a group before the main expression # without including it in the result # \w alpha numeric and underscore # + 1 or more # [^\n]+ 1 or more, anything but new line regex = re.compile(r'(?<=<TYPE>)\w+[^\n]+') doc_type = re.search(regex, doc).group(0).lower() return doc_type project_tests.test_get_document_type(get_document_type)
Tests Passed
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
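As a quick, hypothetical sanity check (the snippet below is made up and only relies on the `get_document_type` function defined above):

toy_document = '\n<TYPE>10-K\n<SEQUENCE>1\n<FILENAME>example-10k.txt\n'
# The regex captures everything after <TYPE> up to the end of that line and lowercases it,
# so this prints '10-k'.
print(get_document_type(toy_document))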
With the `get_document_type` function, we'll filter out all non-10-K documents.
ten_ks_by_ticker = {} for ticker, filling_documents in filling_documents_by_ticker.items(): ten_ks_by_ticker[ticker] = [] for file_date, documents in filling_documents.items(): for document in documents: if get_document_type(document) == '10-k': ten_ks_by_ticker[ticker].append({ 'cik': cik_lookup[ticker], 'file': document, 'file_date': file_date}) project_helper.print_ten_k_data(ten_ks_by_ticker[example_ticker][:5], ['cik', 'file', 'file_date'])
[ { cik: '0001018724' file: '\n<TYPE>10-K\n<SEQUENCE>1\n<FILENAME>amzn-2016123... file_date: '2017-02-10'}, { cik: '0001018724' file: '\n<TYPE>10-K\n<SEQUENCE>1\n<FILENAME>amzn-2015123... file_date: '2016-01-29'}, { cik: '0001018724' file: '\n<TYPE>10-K\n<SEQUENCE>1\n<FILENAME>amzn-2014123... file_date: '2015-01-30'}, { cik: '0001018724' file: '\n<TYPE>10-K\n<SEQUENCE>1\n<FILENAME>amzn-2013123... file_date: '2014-01-31'}, { cik: '0001018724' file: '\n<TYPE>10-K\n<SEQUENCE>1\n<FILENAME>d445434d10k.... file_date: '2013-01-30'}, ]
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Preprocess the Data Clean UpAs you can see, the text of the documents is very messy. To clean it up, we'll remove the HTML and lowercase all the text.
def remove_html_tags(text): text = BeautifulSoup(text, 'html.parser').get_text() return text def clean_text(text): text = text.lower() text = remove_html_tags(text) return text
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
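For illustration, here is a minimal example of what the clean-up does. The HTML fragment is invented, and the call assumes BeautifulSoup was imported earlier in the notebook (as `remove_html_tags` already requires).

toy_html = '<p>Net sales <b>INCREASED</b> during the year</p>'
# Lowercases the text first, then strips the HTML tags, leaving only the readable content:
# 'net sales increased during the year'
print(clean_text(toy_html))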
Using the `clean_text` function, we'll clean up all the documents.
for ticker, ten_ks in ten_ks_by_ticker.items(): for ten_k in tqdm(ten_ks, desc='Cleaning {} 10-Ks'.format(ticker), unit='10-K'): ten_k['file_clean'] = clean_text(ten_k['file']) project_helper.print_ten_k_data(ten_ks_by_ticker[example_ticker][:5], ['file_clean'])
Cleaning AMZN 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 17/17 [00:35<00:00, 2.08s/10-K] Cleaning BMY 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 23/23 [01:15<00:00, 3.30s/10-K] Cleaning CNP 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 15/15 [00:57<00:00, 3.83s/10-K] Cleaning CVX 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [01:52<00:00, 5.36s/10-K] Cleaning FL 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 16/16 [00:25<00:00, 1.61s/10-K] Cleaning FRT 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 19/19 [00:55<00:00, 2.93s/10-K] Cleaning HON 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [01:00<00:00, 3.04s/10-K]
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
LemmatizeWith the text cleaned up, it's time to distill the verbs down to their base forms. Implement the `lemmatize_words` function to lemmatize the verbs in the list of words provided.
from nltk.stem import WordNetLemmatizer from nltk.corpus import wordnet def lemmatize_words(words): """ Lemmatize words Parameters ---------- words : list of str List of words Returns ------- lemmatized_words : list of str List of lemmatized words """ # TODO: Implement WNL = WordNetLemmatizer() lemmatized_words = [WNL.lemmatize(w, 'v') for w in words] return lemmatized_words project_tests.test_lemmatize_words(lemmatize_words)
Tests Passed
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
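A small illustrative call, assuming the NLTK WordNet data has already been downloaded (which the notebook's setup is expected to have done):

# All three verb forms should collapse to the base form 'increase'.
print(lemmatize_words(['increased', 'increasing', 'increases']))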
With the `lemmatize_words` function implemented, let's lemmatize all the data.
word_pattern = re.compile('\w+') for ticker, ten_ks in ten_ks_by_ticker.items(): for ten_k in tqdm(ten_ks, desc='Lemmatize {} 10-Ks'.format(ticker), unit='10-K'): ten_k['file_lemma'] = lemmatize_words(word_pattern.findall(ten_k['file_clean'])) project_helper.print_ten_k_data(ten_ks_by_ticker[example_ticker][:5], ['file_lemma'])
Lemmatize AMZN 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 17/17 [00:04<00:00, 3.9110-K/s] Lemmatize BMY 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 23/23 [00:09<00:00, 2.4010-K/s] Lemmatize CNP 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 15/15 [00:07<00:00, 1.9210-K/s] Lemmatize CVX 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:09<00:00, 2.3310-K/s] Lemmatize FL 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 16/16 [00:03<00:00, 4.4210-K/s] Lemmatize FRT 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 19/19 [00:05<00:00, 3.3110-K/s] Lemmatize HON 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:05<00:00, 3.6610-K/s]
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Remove Stopwords
from nltk.corpus import stopwords lemma_english_stopwords = lemmatize_words(stopwords.words('english')) for ticker, ten_ks in ten_ks_by_ticker.items(): for ten_k in tqdm(ten_ks, desc='Remove Stop Words for {} 10-Ks'.format(ticker), unit='10-K'): ten_k['file_lemma'] = [word for word in ten_k['file_lemma'] if word not in lemma_english_stopwords] print('Stop Words Removed')
Remove Stop Words for AMZN 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 17/17 [00:01<00:00, 9.2810-K/s] Remove Stop Words for BMY 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 23/23 [00:04<00:00, 5.6110-K/s] Remove Stop Words for CNP 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 15/15 [00:03<00:00, 4.5310-K/s] Remove Stop Words for CVX 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 21/21 [00:03<00:00, 5.2910-K/s] Remove Stop Words for FL 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 16/16 [00:01<00:00, 10.2810-K/s] Remove Stop Words for FRT 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 19/19 [00:02<00:00, 7.5110-K/s] Remove Stop Words for HON 10-Ks: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:02<00:00, 8.7210-K/s]
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Analysis on 10-Ks Loughran McDonald Sentiment Word ListsWe'll be using the Loughran and McDonald sentiment word lists. These word lists cover the following sentiments: - Negative - Positive - Uncertainty - Litigious - Constraining - Superfluous - Modal This will allow us to do the sentiment analysis on the 10-Ks. Let's first load these word lists. We'll be looking into a few of these sentiments.
import os sentiments = ['negative', 'positive', 'uncertainty', 'litigious', 'constraining', 'interesting'] sentiment_df = pd.read_csv(os.path.join('..', '..', 'data', 'project_5_loughran_mcdonald', 'loughran_mcdonald_master_dic_2016.csv')) sentiment_df.columns = [column.lower() for column in sentiment_df.columns] # Lowercase the columns for ease of use # Remove unused information sentiment_df = sentiment_df[sentiments + ['word']] sentiment_df[sentiments] = sentiment_df[sentiments].astype(bool) sentiment_df = sentiment_df[(sentiment_df[sentiments]).any(1)] # Apply the same preprocessing to these words as the 10-k words sentiment_df['word'] = lemmatize_words(sentiment_df['word'].str.lower()) sentiment_df = sentiment_df.drop_duplicates('word') sentiment_df.head()
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Bag of WordsUsing the sentiment word lists, let's generate a sentiment bag of words from the 10-K documents. Implement `get_bag_of_words` to generate a bag of words that counts the number of sentiment words in each document. You can ignore words that are not in `sentiment_words`.
from collections import defaultdict, Counter from sklearn.feature_extraction.text import CountVectorizer def get_bag_of_words(sentiment_words, docs): """ Generate a bag of words from documents for a certain sentiment Parameters ---------- sentiment_words: Pandas Series Words that signify a certain sentiment docs : list of str List of documents used to generate bag of words Returns ------- bag_of_words : 2-d Numpy Ndarray of int Bag of words sentiment for each document The first dimension is the document. The second dimension is the word. """ # TODO: Implement # filter out words not in sentiment_words vectorizer = CountVectorizer(vocabulary=sentiment_words.values) word_matrix = vectorizer.fit_transform(docs) bag_of_words = word_matrix.toarray() return bag_of_words project_tests.test_get_bag_of_words(get_bag_of_words)
Tests Passed
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
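To see the shape of the output, here is a toy example with made-up documents and a hand-picked word list; it only relies on the `get_bag_of_words` function above and pandas.

toy_sentiment_words = pd.Series(['loss', 'risk', 'gain'])
toy_docs = ['loss loss risk', 'gain risk']
# One row per document, one column per sentiment word, each cell holding a raw count.
print(get_bag_of_words(toy_sentiment_words, toy_docs))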
Using the `get_bag_of_words` function, we'll generate a bag of words for all the documents.
sentiment_bow_ten_ks = {} for ticker, ten_ks in ten_ks_by_ticker.items(): lemma_docs = [' '.join(ten_k['file_lemma']) for ten_k in ten_ks] sentiment_bow_ten_ks[ticker] = { sentiment: get_bag_of_words(sentiment_df[sentiment_df[sentiment]]['word'], lemma_docs) for sentiment in sentiments} project_helper.print_ten_k_data([sentiment_bow_ten_ks[example_ticker]], sentiments)
[ { negative: '[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n [0 0 0... positive: '[[16 0 0 ..., 0 0 0]\n [16 0 0 ..., 0 0 ... uncertainty: '[[0 0 0 ..., 1 1 3]\n [0 0 0 ..., 1 1 3]\n [0 0 0... litigious: '[[0 0 0 ..., 0 0 0]\n [0 0 0 ..., 0 0 0]\n [0 0 0... constraining: '[[0 0 0 ..., 0 0 2]\n [0 0 0 ..., 0 0 2]\n [0 0 0... interesting: '[[2 0 0 ..., 0 0 0]\n [2 0 0 ..., 0 0 0]\n [2 0 0...}, ]
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Jaccard SimilarityUsing the bag of words, let's calculate the Jaccard similarity and plot it over time. Implement `get_jaccard_similarity` to return the Jaccard similarities between neighboring points in time. Since the input, `bag_of_words_matrix`, is a bag of words for each time period in order, you just need to compute the Jaccard similarities for each pair of neighboring bags of words. Make sure to turn the bag of words into a boolean array when calculating the Jaccard similarity.
from sklearn.metrics import jaccard_similarity_score def get_jaccard_similarity(bag_of_words_matrix): """ Get jaccard similarities for neighboring documents Parameters ---------- bag_of_words : 2-d Numpy Ndarray of int Bag of words sentiment for each document The first dimension is the document. The second dimension is the word. Returns ------- jaccard_similarities : list of float Jaccard similarities for neighboring documents """ # TODO: Implement jaccard_similarities = [] bag_of_words_matrix_bool = bag_of_words_matrix.astype(bool) # compute jaccard similary for neighboring docs for i in range(bag_of_words_matrix.shape[0]-1): jaccard_similarities.append(jaccard_similarity_score(bag_of_words_matrix_bool[i], bag_of_words_matrix_bool[i+1])) return jaccard_similarities project_tests.test_get_jaccard_similarity(get_jaccard_similarity)
Tests Passed
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
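For intuition, the Jaccard similarity of two boolean word vectors is the size of their intersection divided by the size of their union. A toy check of the function above (the matrix is made up, and the call relies on the same scikit-learn version used in the notebook, since `jaccard_similarity_score` was removed in later releases):

toy_bow = np.array([
    [1, 0, 2, 0],
    [1, 1, 0, 0],
    [0, 1, 0, 3]])
# Three documents in time order give two similarity values:
# one comparing documents 0 and 1, and one comparing documents 1 and 2.
print(get_jaccard_similarity(toy_bow))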
Using the `get_jaccard_similarity` function, let's plot the similarities over time.
# Get dates for the universe file_dates = { ticker: [ten_k['file_date'] for ten_k in ten_ks] for ticker, ten_ks in ten_ks_by_ticker.items()} jaccard_similarities = { ticker: { sentiment_name: get_jaccard_similarity(sentiment_values) for sentiment_name, sentiment_values in ten_k_sentiments.items()} for ticker, ten_k_sentiments in sentiment_bow_ten_ks.items()} project_helper.plot_similarities( [jaccard_similarities[example_ticker][sentiment] for sentiment in sentiments], file_dates[example_ticker][1:], 'Jaccard Similarities for {} Sentiment'.format(example_ticker), sentiments)
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
TFIDFUsing the sentiment word lists, let's generate sentiment TFIDF values from the 10-K documents. Implement `get_tfidf` to generate TFIDF values for each document, using the sentiment words as the terms. You can ignore words that are not in `sentiment_words`.
from sklearn.feature_extraction.text import TfidfVectorizer def get_tfidf(sentiment_words, docs): """ Generate TFIDF values from documents for a certain sentiment Parameters ---------- sentiment_words: Pandas Series Words that signify a certain sentiment docs : list of str List of documents used to generate bag of words Returns ------- tfidf : 2-d Numpy Ndarray of float TFIDF sentiment for each document The first dimension is the document. The second dimension is the word. """ # TODO: Implement vectorizer = TfidfVectorizer(vocabulary=sentiment_words.values) # build tfidf matrix tfidf = vectorizer.fit_transform(docs) tfidf = tfidf.toarray() return tfidf project_tests.test_get_tfidf(get_tfidf)
Tests Passed
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
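As with the bag of words, a toy call shows the output shape; the documents and word list here are invented.

toy_sentiment_words = pd.Series(['decline', 'lawsuit', 'gain'])
toy_docs = ['revenue decline offset a small gain', 'lawsuit settle lawsuit']
# One row per document, one column per sentiment word, each cell holding a TFIDF weight.
print(get_tfidf(toy_sentiment_words, toy_docs))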
Using the `get_tfidf` function, let's generate the TFIDF values for all the documents.
sentiment_tfidf_ten_ks = {} for ticker, ten_ks in ten_ks_by_ticker.items(): lemma_docs = [' '.join(ten_k['file_lemma']) for ten_k in ten_ks] sentiment_tfidf_ten_ks[ticker] = { sentiment: get_tfidf(sentiment_df[sentiment_df[sentiment]]['word'], lemma_docs) for sentiment in sentiments} project_helper.print_ten_k_data([sentiment_tfidf_ten_ks[example_ticker]], sentiments)
[ { negative: '[[ 0. 0. 0. ..., 0. ... positive: '[[ 0.22288432 0. 0. ..., 0. ... uncertainty: '[[ 0. 0. 0. ..., 0.005... litigious: '[[ 0. 0. 0. ..., 0. 0. 0.]\n [ 0. 0. 0. ..... constraining: '[[ 0. 0. 0. ..., 0. ... interesting: '[[ 0.01673784 0. 0. ..., 0. ...}, ]
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Cosine SimilarityUsing the TFIDF values, we'll calculate the cosine similarity and plot it over time. Implement `get_cosine_similarity` to return the cosine similarities between neighboring points in time. Since the input, `tfidf_matrix`, is a TFIDF vector for each time period in order, you just need to compute the cosine similarities for each pair of neighboring vectors.
from sklearn.metrics.pairwise import cosine_similarity def get_cosine_similarity(tfidf_matrix): """ Get cosine similarities for each neighboring TFIDF vector/document Parameters ---------- tfidf : 2-d Numpy Ndarray of float TFIDF sentiment for each document The first dimension is the document. The second dimension is the word. Returns ------- cosine_similarities : list of float Cosine similarities for neighboring documents """ # TODO: Implement cosine_similarities = list(np.diag(cosine_similarity(tfidf_matrix, tfidf_matrix), k=1)) return cosine_similarities project_tests.test_get_cosine_similarity(get_cosine_similarity)
Tests Passed
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
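The `np.diag(..., k=1)` call above picks out only the entries of the full similarity matrix that compare each document with the next one. A toy example with made-up TFIDF rows:

toy_tfidf = np.array([
    [0.5, 0.5, 0.0],
    [0.4, 0.6, 0.0],
    [0.0, 0.1, 0.9]])
# Two values: the cosine similarity of rows 0 and 1 (high, nearly parallel vectors)
# and of rows 1 and 2 (low, nearly orthogonal vectors).
print(get_cosine_similarity(toy_tfidf))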
Let's plot the cosine similarities over time.
cosine_similarities = { ticker: { sentiment_name: get_cosine_similarity(sentiment_values) for sentiment_name, sentiment_values in ten_k_sentiments.items()} for ticker, ten_k_sentiments in sentiment_tfidf_ten_ks.items()} project_helper.plot_similarities( [cosine_similarities[example_ticker][sentiment] for sentiment in sentiments], file_dates[example_ticker][1:], 'Cosine Similarities for {} Sentiment'.format(example_ticker), sentiments)
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Evaluate Alpha FactorsJust like we did in project 4, let's evaluate the alpha factors. For this section, we'll just be looking at the cosine similarities, but the same analysis can be applied to the Jaccard similarities as well. Price DataLet's get yearly pricing to run the factor against, since 10-Ks are produced annually.
pricing = pd.read_csv('../../data/project_5_yr/yr-quotemedia.csv', parse_dates=['date']) pricing = pricing.pivot(index='date', columns='ticker', values='adj_close') pricing
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Dict to DataFrameThe alphalens library uses dataframes, so we'll need to turn our dictionary into a dataframe.
cosine_similarities_df_dict = {'date': [], 'ticker': [], 'sentiment': [], 'value': []} for ticker, ten_k_sentiments in cosine_similarities.items(): for sentiment_name, sentiment_values in ten_k_sentiments.items(): for value_index, sentiment_value in enumerate(sentiment_values): cosine_similarities_df_dict['ticker'].append(ticker) cosine_similarities_df_dict['sentiment'].append(sentiment_name) cosine_similarities_df_dict['value'].append(sentiment_value) cosine_similarities_df_dict['date'].append(file_dates[ticker][1:][value_index]) cosine_similarities_df = pd.DataFrame(cosine_similarities_df_dict) cosine_similarities_df['date'] = pd.DatetimeIndex(cosine_similarities_df['date']).year cosine_similarities_df['date'] = pd.to_datetime(cosine_similarities_df['date'], format='%Y') cosine_similarities_df.head()
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Alphalens FormatIn order to use a lot of the alphalens functions, we need to align the indices and convert the times to unix timestamps. In this next cell, we'll do just that.
import alphalens as al factor_data = {} skipped_sentiments = [] for sentiment in sentiments: cs_df = cosine_similarities_df[(cosine_similarities_df['sentiment'] == sentiment)] cs_df = cs_df.pivot(index='date', columns='ticker', values='value') try: data = al.utils.get_clean_factor_and_forward_returns(cs_df.stack(), pricing, quantiles=5, bins=None, periods=[1]) factor_data[sentiment] = data except: skipped_sentiments.append(sentiment) if skipped_sentiments: print('\nSkipped the following sentiments:\n{}'.format('\n'.join(skipped_sentiments))) factor_data[sentiments[0]].head()
/opt/conda/lib/python3.6/site-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead. from pandas.core import datetools
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Alphalens Format with Unix TimeAlphalens's `factor_rank_autocorrelation` and `mean_return_by_quantile` functions require unix timestamps to work, so we'll also create factor dataframes with unix time.
unixt_factor_data = { factor: data.set_index(pd.MultiIndex.from_tuples( [(x.timestamp(), y) for x, y in data.index.values], names=['date', 'asset'])) for factor, data in factor_data.items()}
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Factor ReturnsLet's view the factor returns over time. We should see them generally move up and to the right.
ls_factor_returns = pd.DataFrame() for factor_name, data in factor_data.items(): ls_factor_returns[factor_name] = al.performance.factor_returns(data).iloc[:, 0] (1 + ls_factor_returns).cumprod().plot()
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Basis Points Per Day per QuantileIt is not enough to look just at the factor-weighted return. A good alpha is also monotonic in quantiles. Let's look at the basis points for the factor returns.
qr_factor_returns = pd.DataFrame() for factor_name, data in unixt_factor_data.items(): qr_factor_returns[factor_name] = al.performance.mean_return_by_quantile(data)[0].iloc[:, 0] (10000*qr_factor_returns).plot.bar( subplots=True, sharey=True, layout=(5,3), figsize=(14, 14), legend=False)
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Turnover AnalysisWithout doing a full and formal backtest, we can analyze how stable the alphas are over time. Stability in this sense means that from period to period, the alpha ranks do not change much. Since trading is costly, we always prefer, all other things being equal, that the ranks do not change significantly per period. We can measure this with the **Factor Rank Autocorrelation (FRA)**.
ls_FRA = pd.DataFrame() for factor, data in unixt_factor_data.items(): ls_FRA[factor] = al.performance.factor_rank_autocorrelation(data) ls_FRA.plot(title="Factor Rank Autocorrelation")
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
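Conceptually, the FRA for one period is just the rank correlation between the factor values of the same assets in two consecutive periods. Below is a hand-rolled sketch of that idea with invented numbers; it assumes scipy is available, and alphalens performs the real computation above.

from scipy.stats import spearmanr

# Made-up sentiment-similarity values for the same five tickers in two consecutive years.
factor_year_1 = [0.91, 0.42, 0.77, 0.15, 0.60]
factor_year_2 = [0.88, 0.45, 0.70, 0.20, 0.55]

# A value near 1 means the cross-sectional ranking barely changed, i.e. low turnover.
print(spearmanr(factor_year_1, factor_year_2).correlation)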
Sharpe Ratio of the AlphasThe last analysis we'll do on the factors is the Sharpe ratio. Let's see what the Sharpe ratios for the factors are. Generally, a Sharpe ratio near 1.0 or higher is acceptable for a single alpha in this universe.
daily_annualization_factor = np.sqrt(252) (daily_annualization_factor * ls_factor_returns.mean() / ls_factor_returns.std()).round(2)
_____no_output_____
Apache-2.0
NLP on Financial Statements/project_5_starter.ipynb
saidulislam/AI-for-Trading
Machine Learning Model Building Pipeline: Machine Learning Model BuildIn the following notebooks, I will take you through a practical example of each one of the steps in the Machine Learning model building pipeline that I learned through experience and by analysing many Kaggle notebooks. There will be a notebook for each one of the Machine Learning Pipeline steps:1. Data Analysis2. Feature Engineering3. Feature Selection4. Model Building**This is the notebook for step 4: Building the Final Machine Learning Model**We will use the house price dataset available on [Kaggle.com](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). See below for more details.=================================================================================================== Predicting Sale Price of HousesThe aim of the project is to build a machine learning model to predict the sale price of homes based on different explanatory variables describing aspects of residential houses. Why is this important? Predicting house prices is useful for identifying fruitful investments, or for determining whether the price advertised for a house is over- or underestimated, before making a buying judgment. What is the objective of the machine learning model?We aim to minimise the difference between the real price and the price estimated by our model. We will evaluate model performance using the mean squared error (mse) and the root of the mean squared error (rmse). How do I download the dataset?To download the House Price dataset go to this website:https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data==================================================================================================== House Prices dataset: Machine Learning Model buildIn the following cells, we will finally build our machine learning models, utilising the engineered data and the pre-selected features. Setting the seedIt is important to note that we are engineering variables and pre-processing data with the idea of deploying the model if we find business value in it. Therefore, from now on, for each step that includes some element of randomness, it is extremely important that we **set the seed**. This way, we can obtain reproducibility between our research and our development code.Perhaps the most important lesson I learned from my mistakes: **always set the seed**.Let's go ahead and load the dataset.
# to handle datasets import pandas as pd import numpy as np # for plotting import matplotlib.pyplot as plt %matplotlib inline # to build the models from sklearn.linear_model import Lasso # to evaluate the models from sklearn.metrics import mean_squared_error from math import sqrt # to visualise al the columns in the dataframe pd.pandas.set_option('display.max_columns', None) # load dataset # We load the datasets with the engineered values X_train = pd.read_csv('xtrain.csv') X_test = pd.read_csv('xtest.csv') X_train.head() # capture the target y_train = X_train['SalePrice'] y_test = X_test['SalePrice'] # load selected features features = pd.read_csv('selected_features.csv', header=None) features = [x for x in features[0]] features = features[1:] features # reduce the train and test set to the desired features X_train = X_train[features] X_test = X_test[features]
_____no_output_____
MIT
04 Model_Building_and_Evaluaion/Model_Building.ipynb
Karthikraja-Pandian/Project---House-Prices-Prediction
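As a minimal illustration of the reproducibility point above (this snippet is an addition, not part of the original project code), randomness can be fixed both at the numpy level and inside scikit-learn estimators:

import numpy as np
from sklearn.linear_model import Lasso

np.random.seed(0)          # fixes numpy's global random number generator
print(np.random.rand(3))   # the same three numbers on every run

# scikit-learn estimators take their own seed through the random_state argument
model = Lasso(alpha=0.005, random_state=0)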
Regularised linear regressionRemember to set the seed.
# train the model lin_model = Lasso(alpha=0.005, random_state=0) # remember to set the random_state / seed lin_model.fit(X_train, y_train) # evaluate the model: # remember that we log transformed the output (SalePrice) in our feature engineering notebook # In order to get the true performance of the Lasso # we need to transform both the target and the predictions # back to the original house prices values. # We will evaluate performance using the mean squared error and the # root of the mean squared error pred = lin_model.predict(X_train) print('linear train mse: {}'.format(mean_squared_error(np.exp(y_train), np.exp(pred)))) print('linear train rmse: {}'.format(sqrt(mean_squared_error(np.exp(y_train), np.exp(pred))))) print() pred = lin_model.predict(X_test) print('linear test mse: {}'.format(mean_squared_error(np.exp(y_test), np.exp(pred)))) print('linear test rmse: {}'.format(sqrt(mean_squared_error(np.exp(y_test), np.exp(pred))))) print() print('Median house price: ', np.exp(y_train).median()) # let's evaluate our predictions with respect to the original price plt.scatter(y_test, lin_model.predict(X_test)) plt.xlabel('True House Price') plt.ylabel('Predicted House Price') plt.title('Evaluation of Lasso Predictions')
_____no_output_____
MIT
04 Model_Building_and_Evaluaion/Model_Building.ipynb
Karthikraja-Pandian/Project---House-Prices-Prediction
We can see that our model is doing a pretty good job at estimating house prices.
# let's evaluate the distribution of the errors: # they should be fairly normally distributed errors = y_test - lin_model.predict(X_test) errors.hist(bins=15)
_____no_output_____
MIT
04 Model_Building_and_Evaluaion/Model_Building.ipynb
Karthikraja-Pandian/Project---House-Prices-Prediction
The distribution of the errors follows a Gaussian distribution quite closely. That suggests that our model is doing a good job as well. Feature importance
# Finally, just for fun, let's look at the feature importance importance = pd.Series(np.abs(lin_model.coef_.ravel())) importance.index = features importance.sort_values(inplace=True, ascending=False) importance.plot.bar(figsize=(18,6)) plt.ylabel('Lasso Coefficients') plt.title('Feature Importance')
_____no_output_____
MIT
04 Model_Building_and_Evaluaion/Model_Building.ipynb
Karthikraja-Pandian/Project---House-Prices-Prediction
ENV / ATM 415: Climate Laboratory The planetary energy budget in CESM simulations Tuesday April 19 and Thursday April 21, 2016_____________________________________
%matplotlib inline import numpy as np import matplotlib.pyplot as plt import netCDF4 as nc
/Users/Brian/anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment. warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
Open the output from our control simulation with the slab ocean version of the CESM:
## To read data over the internet control_filename = 'som_1850_f19.cam.h0.clim.nc' datapath = 'http://ramadda.atmos.albany.edu:8080/repository/opendap/latest/Top/Users/Brian+Rose/CESM+runs/' endstr = '/entry.das' control = nc.Dataset( datapath + 'som_1850_f19/' + control_filename + endstr ) ## To read from a local copy of the file ## (just a small subset of the total list of variables, to save disk space) #control_filename = 'som_1850_f19.cam.h0.clim_subset.nc' #control = nc.Dataset( control_filename )
_____no_output_____
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
The full file from the online server contains many many variables, describing all aspects of the model climatology.Whether we see a long list or a short list in the following code block depends on whether we are reading the full output file or the much smaller subset:
for v in control.variables: print v
lev hyam hybm ilev hyai hybi P0 time date datesec lat lon slat slon w_stag time_bnds date_written time_written ntrm ntrn ntrk ndbase nsbase nbdate nbsec mdt nlon wnummax gw ndcur nscur co2vmr ch4vmr n2ovmr f11vmr f12vmr sol_tsi nsteph AEROD_v CLDHGH CLDICE CLDLIQ CLDLOW CLDMED CLDTOT CLOUD CONCLD DCQ DTCOND DTV EMIS FICE FLDS FLDSC FLNS FLNSC FLNT FLNTC FLUT FLUTC FSDS FSDSC FSDTOA FSNS FSNSC FSNT FSNTC FSNTOA FSNTOAC FSUTOA ICEFRAC ICIMR ICWMR LANDFRAC LHFLX LWCF MSKtem OCNFRAC OMEGA OMEGAT PBLH PHIS PRECC PRECL PRECSC PRECSL PS PSL Q QFLX QREFHT QRL QRS RELHUM SFCLDICE SFCLDLIQ SHFLX SNOWHICE SNOWHLND SOLIN SWCF T TAUX TAUY TGCLDCWP TGCLDIWP TGCLDLWP TH TH2d TMQ TREFHT TS TSMN TSMX U U10 U2d UTGWORO UU UV2d UV3d UW2d UW3d V V2d VD01 VQ VT VTH2d VTH3d VU VV W2d WTH3d Z3
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
Today we need just a few of these variables:- `TS`: the surface temperature- `FLNT`: the longwave radiation at the top of the atmosphere (i.e. what we call the OLR)- `FSNT`: the net shortwave radiation at the top of the atmosphere (i.e. what we call the ASR)- `FLNTC`: the clear-sky OLR- `FSNTC`: the clear-sky ASR Take a look at some of the meta-data for these fields:
for field in ['TS', 'FLNT', 'FSNT', 'FLNTC', 'FSNTC']: print control.variables[field]
<type 'netCDF4._netCDF4.Variable'> float32 TS(time, lat, lon) units: K long_name: Surface temperature (radiative) cell_methods: time: mean time: mean unlimited dimensions: time current shape = (12, 96, 144) filling off <type 'netCDF4._netCDF4.Variable'> float32 FLNT(time, lat, lon) Sampling_Sequence: rad_lwsw units: W/m2 long_name: Net longwave flux at top of model cell_methods: time: mean time: mean unlimited dimensions: time current shape = (12, 96, 144) filling off <type 'netCDF4._netCDF4.Variable'> float32 FSNT(time, lat, lon) Sampling_Sequence: rad_lwsw units: W/m2 long_name: Net solar flux at top of model cell_methods: time: mean time: mean unlimited dimensions: time current shape = (12, 96, 144) filling off <type 'netCDF4._netCDF4.Variable'> float32 FLNTC(time, lat, lon) Sampling_Sequence: rad_lwsw units: W/m2 long_name: Clearsky net longwave flux at top of model cell_methods: time: mean time: mean unlimited dimensions: time current shape = (12, 96, 144) filling off <type 'netCDF4._netCDF4.Variable'> float32 FSNTC(time, lat, lon) Sampling_Sequence: rad_lwsw units: W/m2 long_name: Clearsky net solar flux at top of model cell_methods: time: mean time: mean unlimited dimensions: time current shape = (12, 96, 144) filling off
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
Each one of these variables has dimensions `(12, 96, 144)`, which corresponds to time (12 months), latitude and longitude.Take a look at one of the coordinate variables:
print control.variables['lat']
<type 'netCDF4._netCDF4.Variable'> float64 lat(lat) long_name: latitude units: degrees_north unlimited dimensions: current shape = (96,) filling off
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
Now let's load in the coordinate data, to use later for plotting:
lat = control.variables['lat'][:] lon = control.variables['lon'][:] print lat
[-90. -88.10526316 -86.21052632 -84.31578947 -82.42105263 -80.52631579 -78.63157895 -76.73684211 -74.84210526 -72.94736842 -71.05263158 -69.15789474 -67.26315789 -65.36842105 -63.47368421 -61.57894737 -59.68421053 -57.78947368 -55.89473684 -54. -52.10526316 -50.21052632 -48.31578947 -46.42105263 -44.52631579 -42.63157895 -40.73684211 -38.84210526 -36.94736842 -35.05263158 -33.15789474 -31.26315789 -29.36842105 -27.47368421 -25.57894737 -23.68421053 -21.78947368 -19.89473684 -18. -16.10526316 -14.21052632 -12.31578947 -10.42105263 -8.52631579 -6.63157895 -4.73684211 -2.84210526 -0.94736842 0.94736842 2.84210526 4.73684211 6.63157895 8.52631579 10.42105263 12.31578947 14.21052632 16.10526316 18. 19.89473684 21.78947368 23.68421053 25.57894737 27.47368421 29.36842105 31.26315789 33.15789474 35.05263158 36.94736842 38.84210526 40.73684211 42.63157895 44.52631579 46.42105263 48.31578947 50.21052632 52.10526316 54. 55.89473684 57.78947368 59.68421053 61.57894737 63.47368421 65.36842105 67.26315789 69.15789474 71.05263158 72.94736842 74.84210526 76.73684211 78.63157895 80.52631579 82.42105263 84.31578947 86.21052632 88.10526316 90. ]
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
Surface temperature in the control simulation
# A re-usable function to make a map of a 2d field on a latitude / longitude grid def make_map(field_2d): # Make a filled contour plot fig = plt.figure(figsize=(10,5)) cax = plt.contourf(lon, lat, field_2d) # draw a single contour to outline the continents plt.contour( lon, lat, control.variables['LANDFRAC'][0,:,:], [0.5], colors='k') plt.xlabel('Longitude (degrees east)') plt.ylabel('Latitude (degrees north)') plt.colorbar(cax) # Here is a convenient function that takes the name of a variable in our CESM output # and make a map of its annual average def map_this(fieldname, dataset=control): field = dataset.variables[fieldname][:] field_annual = np.mean(field, axis=0) make_map(field_annual) # Use this function to make a quick map of the annual average surface temperature: map_this('TS')
_____no_output_____
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
Computing a global average
# The lat/lon dimensions after taking the time average: TS_annual = np.mean(control.variables['TS'][:], axis=0) TS_annual.shape
_____no_output_____
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
Define a little re-usable function to take the global average of any of these fields:
def global_mean(field_2d): '''This function takes a 2D array on a regular latitude-longitude grid and returns the global area-weighted average''' zonal_mean = np.mean(field_2d, axis=1) return np.average(zonal_mean, weights=np.cos(np.deg2rad(lat))) # Again, a convenience function that takes just the name of the model output field # and returns its time and global average def global_mean_this(fieldname, dataset=control): field = dataset.variables[fieldname][:] field_annual = np.mean(field, axis=0) return global_mean(field_annual)
_____no_output_____
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
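A quick sanity check of the weighting (an addition, not from the original notebook): the weighted mean is $\bar{F} = \sum_j \cos\phi_j \, \overline{F}_j \big/ \sum_j \cos\phi_j$, where $\overline{F}_j$ is the zonal mean at latitude $\phi_j$, so a field that equals 1 everywhere should average to exactly 1.

uniform_field = np.ones((lat.size, lon.size))
print(global_mean(uniform_field))   # should print 1.0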
Now compute the global average surface temperature in the simulation:
global_mean_this('TS')
_____no_output_____
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
Cloud cover in the control simulation The model simulates cloud amount in every grid box. The cloud field is thus 4-dimensional:
# This field is not included in the small subset file # so this will only work if you are reading the full file from the online server control.variables['CLOUD']
_____no_output_____
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
To simplify things we can just look at the **total cloud cover**, integrated from the surface to the top of the atmosphere:
control.variables['CLDTOT'] map_this('CLDTOT')
_____no_output_____
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
Which parts of Earth are cloudy and which are not? (at least in this simulation) Exercise 1: Make three maps: ASR, OLR, and the net radiation ASR-OLR (all annual averages)What interesting features do you see on these maps?
# To get you started, here is the ASR map_this('FSNT') map_this('FLNT') net_radiation = np.mean(control.variables['FSNT'][:] - control.variables['FLNT'][:], axis=0) make_map(net_radiation)
_____no_output_____
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
Exercise 2: Calculate the global average net radiation. Is it close to zero? What does that mean?
global_mean(net_radiation)
_____no_output_____
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
Exercise 3: Make maps of the clear-sky ASR and clear-sky OLRThese diagnostics have been calculated by the GCM. Basically at every timestep, the GCM calculates the radiation twice: once with the clouds and once without the clouds. Exercise 4: Make a map of the Cloud Radiative EffectRecall that we define $CRE$ as$$ CRE = \left( ASR - ASR_{clear} \right) - \left( OLR - OLR_{clear} \right) $$This quantity is **positive** where the clouds have a **net warming effect** on the climate. Exercise 5: in the global average, are the clouds warming or cooling the climate in the CESM control simulation? Climate sensitivity in the CESM: the effects of doubling CO2 How much CO2 was in the atmosphere for the control simulation?This information is available in the full output file (this won't work with the local subset file):
# The meta-data: control.variables['co2vmr'] # The data themselves, expressed in ppm: control.variables['co2vmr'][:] * 1E6
_____no_output_____
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
Answer: the CO2 concentration is 284.7 ppm in the control simulation. Now we want to see how the climate changes in the CESM when we double CO2 and run it out to equilibrium.I have done this. Because we are using a slab ocean model, it reaches equilibrium after just a few decades.Let's now open up the output file from the 2xCO2 scenario:
## To read data over the internet # doubleCO2_filename = 'som_1850_2xCO2.cam.h0.clim.nc' # doubleCO2 = nc.Dataset( datapath + 'som_1850_f19/' + doubleCO2_filename + endstr ) ## To read from a local copy of the file ## (just a small subset of the total list of variables, to save disk space) doubleCO2_filename = 'som_1850_2xCO2.cam.h0.clim_subset.nc' doubleCO2 = nc.Dataset( doubleCO2_filename )
_____no_output_____
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site
This file has all the same fields as `control`, but they reflect the new equilibrium climate after doubling CO2.Let's verify the CO2 amount:
doubleCO2.variables['co2vmr'][:] * 1E6
_____no_output_____
MIT
notes/CESM_energy_budget.ipynb
brian-rose/env-415-site