| markdown (stringlengths 0–1.02M) | code (stringlengths 0–832k) | output (stringlengths 0–1.02M) | license (stringlengths 3–36) | path (stringlengths 6–265) | repo_name (stringlengths 6–127) |
---|---|---|---|---|---|
Is there a relationship between the number of groups that a user has sent messages to and the number of messages that user has sent (in total, or as the median number of messages per group)? | working.plot.scatter('Number of Groups','Total Messages', xlim=(1,300), ylim=(1,20000), logx=False, logy=True) | _____no_output_____ | MIT | examples/experimental_notebooks/IETF Participants.ipynb | nllz/bigbang |
It appears that there are interesting outliers here: some people send a couple of messages each to a large number of groups, while a separate group of outliers sends many messages to many groups. That might be an elite component worthy of separate analysis. A density graph shows, however, that although there are people who send many messages to a small number of groups, most people are clustered around sending few messages to few groups. | sns.jointplot(x='Number of Groups',y='Total Messages (log)', data=working, kind="kde", xlim=(0,50), ylim=(0,3)); | _____no_output_____ | MIT | examples/experimental_notebooks/IETF Participants.ipynb | nllz/bigbang |
Relationships between groups and participants Can we learn implicit relationships between groups based on the messaging patterns of participants? PCA We want to work with just the data of people and how many messages they sent to each group. | df = people[people['Total Messages'] > 5]
df = df.drop(columns=['email','name','Total Messages','Number of Groups','Median Messages per Group'])
df = df.fillna(0) | _____no_output_____ | MIT | examples/experimental_notebooks/IETF Participants.ipynb | nllz/bigbang |
Principal Component Analysis (PCA) will seek to explain the most variance in the samples (participants) based on the features (messages sent to different lists). Let's try with two components and see what PCA sees as the most distinguishing dimensions of IETF participation. | import sklearn
from sklearn.decomposition import PCA
import sklearn.preprocessing  # explicit import so sklearn.preprocessing.maxabs_scale resolves
scaled = sklearn.preprocessing.maxabs_scale(df)
pca = PCA(n_components=2, whiten=True)
pca.fit(scaled)
components_frame = pd.DataFrame(pca.components_)
components_frame.columns = df.columns
components_frame
for i, row in components_frame.iterrows():
print('\nComponent %d' % i)
r = row.sort_values(ascending=False)
print('Most positive correlation:\n %s' % r[:5].index.values)
print('Most negative correlation:\n %s' % r[-5:].index.values) |
Component 0
Most positive correlation:
['93attendees' '88attendees' '77attendees' '87attendees' 'bofchairs']
Most negative correlation:
['tap' 'eos' 'dmarc-report' 'web' 'spam']
Component 1
Most positive correlation:
['89all' '90all' '91all' '82all' '94all']
Most negative correlation:
['ippm' 'rtgwg' 'i-d-announce' 'l2vpn' 'l3vpn']
| MIT | examples/experimental_notebooks/IETF Participants.ipynb | nllz/bigbang |
Component 0 is mostly routing (Layer 3 and Layer 2 VPNs, the routing area working group, interdomain routing; IP Performance/Measurement seems different -- is it related?). Component 1 is all Internet area groups, mostly related to IPv6, and specifically different groups working on mobility-related extensions to IPv6. When the data was unscaled, PCA components seemed to connect to ops and ipv6, a significantly different result. For our two components, we can see which features are most positively correlated and which are most negatively correlated. Looking up the positively correlated groups, it seems like there is some meaningful coherence here. On Component 0, we see groups in the "ops" area: groups related to the management, configuration and measurement of networks. On the other component, we see groups in the Internet and transport areas: groups related to IPv6, the transport area and PSTN transport. That we see such different results when the data is first scaled by each feature perhaps suggests that the initial analysis was just picking up on the largest groups. | pca.explained_variance_ | _____no_output_____ | MIT | examples/experimental_notebooks/IETF Participants.ipynb | nllz/bigbang |
The explained variance by our components seems extremely tiny. With two components (or the two most significant components), we can attempt a basic visualization as a scatter plot. | component_df = pd.DataFrame(pca.transform(df), columns=['PCA%i' % i for i in range(2)], index=df.index)
component_df.plot.scatter(x='PCA0',y='PCA1') | _____no_output_____ | MIT | examples/experimental_notebooks/IETF Participants.ipynb | nllz/bigbang |
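Since `explained_variance_` is reported in the units of the (scaled) data, it can be hard to judge how small "tiny" really is. A quick check (just an illustrative aside, assuming the `pca` object fit on `scaled` above) is to look at the fraction of total variance each component captures:

```python
# Fraction of total variance captured by each of the two components
# (assumes `pca` was already fit on `scaled` in the cells above).
print(pca.explained_variance_ratio_)
print("total variance captured:", pca.explained_variance_ratio_.sum())
```

If this total is small, the 2-D scatter above compresses away most of the structure in the participation data.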
And with a larger number of components? | pca = PCA(n_components=10, whiten=True)
pca.fit(scaled)
components_frame = pd.DataFrame(pca.components_)
components_frame.columns = df.columns
for i, row in components_frame.iterrows():
print('\nComponent %d' % i)
r = row.sort_values(ascending=False)
print('Most positive correlation:\n %s' % r[:5].index.values)
print('Most negative correlation:\n %s' % r[-5:].index.values) |
Component 0
Most positive correlation:
['93attendees' '88attendees' '77attendees' '87attendees' 'bofchairs']
Most negative correlation:
['tap' 'eos' 'dmarc-report' 'web' 'spam']
Component 1
Most positive correlation:
['89all' '90all' '91all' '82all' '94all']
Most negative correlation:
['ippm' 'rtgwg' 'i-d-announce' 'l2vpn' 'l3vpn']
Component 2
Most positive correlation:
['l3vpn' 'l2vpn' 'adslmib' 'i-d-announce' 'psamp-text']
Most negative correlation:
['100attendees' '96attendees' '88attendees' '97attendees' '93attendees']
Component 3
Most positive correlation:
['88attendees' 'ngtrans' '94attendees' '96attendees' '93attendees']
Most negative correlation:
['websec' 'happiana' 'art' 'http-auth' 'apps-discuss']
Component 4
Most positive correlation:
['97attendees' '96attendees' 'rtgwg' '99attendees' 'rtg-yang-coord']
Most negative correlation:
['monami6' '68attendees' 'mip6' '77attendees' '72attendees']
Component 5
Most positive correlation:
['ianaplan' 'iasa20' 'v6ops' 'mtgvenue' 'ipv6']
Most negative correlation:
['martini' '87attendees' '81attendees' 'rai' 'dispatch']
Component 6
Most positive correlation:
['72attendees' 'opsawg' 'netconf' 'mib-doctors' 'supa']
Most negative correlation:
['94attendees' '99attendees' '96attendees' '100attendees' '97attendees']
Component 7
Most positive correlation:
['dispatch' 'rai' 'p2psip' 'martini' 'avtext']
Most negative correlation:
['ietf-message-headers' 'hubmib' 'happiana' 'psamp-text' 'apps-discuss']
Component 8
Most positive correlation:
['72attendees' 'idr' '81attendees' '74attendees' '75attendees']
Most negative correlation:
['bofchairs' 'sipcore' 'martini' 'rai' 'dispatch']
Component 9
Most positive correlation:
['tools-development' 'ietf-sow' 'agenda-tool' 'ccg' 'iola-wgcharter-tool']
Most negative correlation:
['mcic' 'vpn4dc' 'wgguide' 'apps-discuss' '77attendees']
| MIT | examples/experimental_notebooks/IETF Participants.ipynb | nllz/bigbang |
There are definitely subject domain areas in these lists (the last one, for example, on groups related to phone calls and emergency services). Also interesting is the presence of some meta-topics, like `mtgvenue` or `policy` or `iasa20` (an IETF governance topic). _Future work: we might be able to use this sparse matrix of participation in different lists to provide recommendations of similarity. "People who send messages to the same mix of groups you send to also like this other list" or "People who like this list, also often like this list"._ Betweenness, PageRank and graph visualization Because we have people and the groups they send to, we can construct a _bipartite graph_. We'll use just the top 5000 people, in order to make complicated calculations run faster. | df = people.sort_values(by="Total Messages",ascending=False)[:5000]
df = df.drop(columns=['email','name','Total Messages','Number of Groups','Median Messages per Group'])
df = df.fillna(0)
import networkx as nx
G = nx.Graph()
for group in df.columns:
G.add_node(group,type="group")
for name, data in df.iterrows():
G.add_node(name,type="person")
for group, weight in data.items():
if weight > 0:
G.add_edge(name,group,weight=weight)
nx.is_bipartite(G) | _____no_output_____ | MIT | examples/experimental_notebooks/IETF Participants.ipynb | nllz/bigbang |
Yep, it is bipartite! Now, we can export a graph file for use in visualization software Gephi. | nx.write_gexf(G,'ietf-participation-bipartite.gexf')
people_nodes, group_nodes = nx.algorithms.bipartite.sets(G) | _____no_output_____ | MIT | examples/experimental_notebooks/IETF Participants.ipynb | nllz/bigbang |
We can calculate the "PageRank" of each person and group, using the weights (number of messages) between groups and people to distribute a kind of influence. | pr = nx.pagerank(G, weight="weight")
nx.set_node_attributes(G, "pagerank", pr)
sorted([node for node in list(G.nodes(data=True))
if node[1]['type'] == 'group'],
key=lambda x: x[1]['pagerank'],
reverse =True)[:10]
sorted([node for node in list(G.nodes(data=True))
if node[1]['type'] == 'person'],
key=lambda x: x[1]['pagerank'],
reverse =True)[:10] | _____no_output_____ | MIT | examples/experimental_notebooks/IETF Participants.ipynb | nllz/bigbang |
However, PageRank is probably less informative than usual here, because this is a bipartite, undirected graph. Instead, let's calculate a normalized closeness centrality specific to bipartite graphs. | person_nodes = [node[0] for node in G.nodes(data=True) if node[1]['type'] == 'person'] | _____no_output_____ | MIT | examples/experimental_notebooks/IETF Participants.ipynb | nllz/bigbang |
**NB: Slow operation for large graphs.** | cc = nx.algorithms.bipartite.centrality.closeness_centrality(G, person_nodes, normalized=True)
for node, value in list(cc.items()):
if not isinstance(node, str): # flag spurious non-string node keys (e.g. the 14350.0 removed below)
print(node)
print(value)
del cc[14350.0] # remove a spurious node value
nx.set_node_attributes(G, "closeness", cc)
sorted([node for node in list(G.nodes(data=True))
if node[1]['type'] == 'person'],
key=lambda x: x[1]['closeness'],
reverse=True)[:25] | _____no_output_____ | MIT | examples/experimental_notebooks/IETF Participants.ipynb | nllz/bigbang |
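Following up on the future-work note above about recommendations: a minimal, illustrative sketch (not part of the original analysis) of "lists liked by people who post to the same lists you do" can be built from the same people-by-list matrix `df` used for the bipartite graph, via column-wise cosine similarity. The example list name passed to the helper is hypothetical; any column of `df` would work.

```python
from sklearn.metrics.pairwise import cosine_similarity

# Rows of `df` are people, columns are mailing lists; similar columns mean
# similar sets of senders ("people who post to X also tend to post to Y").
list_similarity = pd.DataFrame(cosine_similarity(df.T.values),
                               index=df.columns, columns=df.columns)

def similar_lists(list_name, n=5):
    # The n lists whose sender profiles are most similar to `list_name`.
    return list_similarity[list_name].drop(list_name).sort_values(ascending=False)[:n]

# e.g. similar_lists('ipv6')  # hypothetical example; use any list in df.columns
```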
Data description

`alldat` contains 39 sessions from 10 mice, data from Steinmetz et al, 2019. Time bins for all measurements are 10ms, starting 500ms before stimulus onset. The mouse had to determine which side has the highest contrast. For each `dat = alldat[k]`, you have the following fields:

* `dat['mouse_name']`: mouse name
* `dat['date_exp']`: when a session was performed
* `dat['spks']`: neurons by trials by time bins.
* `dat['brain_area']`: brain area for each neuron recorded.
* `dat['contrast_right']`: contrast level for the right stimulus, which is always contralateral to the recorded brain areas.
* `dat['contrast_left']`: contrast level for left stimulus.
* `dat['gocue']`: when the go cue sound was played.
* `dat['response_times']`: when the response was registered, which has to be after the go cue. The mouse can turn the wheel before the go cue (and nearly always does!), but the stimulus on the screen won't move before the go cue.
* `dat['response']`: which side the response was (`-1`, `0`, `1`). When the right-side stimulus had higher contrast, the correct choice was `-1`. `0` is a no go response.
* `dat['feedback_time']`: when feedback was provided.
* `dat['feedback_type']`: if the feedback was positive (`+1`, reward) or negative (`-1`, white noise burst).
* `dat['wheel']`: exact position of the wheel that the mouse uses to make a response, binned at `10ms`.
* `dat['pupil']`: pupil area (noisy, because pupil is very small) + pupil horizontal and vertical position.
* `dat['lfp']`: recording of the local field potential in each brain area from this experiment, binned at `10ms`.
* `dat['brain_area_lfp']`: brain area names for the LFP channels.
* `dat['trough_to_peak']`: measures the width of the action potential waveform for each neuron. Widths `<=10` samples are "putative fast spiking neurons".
* `dat['waveform_w']`: temporal components of spike waveforms. `w@u` reconstructs the time by channels action potential shape.
* `dat['waveform_u']`: spatial components of spike waveforms.
* `dat['%X%_passive']`: same as above for `X` = {`spks`, `lfp`, `pupil`, `wheel`, `contrast_left`, `contrast_right`} but for passive trials at the end of the recording when the mouse was no longer engaged and stopped making responses.

Hypothesis

* Prestimulus activity for trial pairs [incorrect → correct, correct → correct] better predicts current trial outcome depending on the outcome of the previous trial.

Spike Labeling

* We were interested in classifying our neurons based on their waveform properties. To do that, we label them based on [(Loren M. Frank, Emery N. Brown, and Matthew A. Wilson, 2001)](https://journals.physiology.org/doi/full/10.1152/jn.2001.86.4.2029), in which putative excitatory neurons (PE) have a trough-to-peak time > 0.4 $ms$ and a mean firing rate < 5 $Hz$. | #@title Boundaries plot
dt_waveforms = 1/30000 # dt of waveform
binsize = dat['bin_size'] # bin times spikes
mean_firing = dat['spks'].mean(axis = (1,2)) * 1/binsize # computing mean firing rate
t_t_peak = dat['trough_to_peak'] * dt_waveforms * 1e3 # computing trough to peak time in ms
plt.scatter(mean_firing,t_t_peak)
plt.axhline(y=0.4,ls = '--', alpha = 0.5, c = 'r')
plt.axvline(x=5,ls = '--', alpha = 0.5, c = 'r')
plt.ylabel('Trough to peak ($ms$)')
plt.xlabel('Mean Firing Rate (Hz)');
| _____no_output_____ | MIT | NMA_project.ipynb | AvocadoChutneys/ProjectNMA |
Next, we create a dataframe with the related labels: | #@title Label DataFrame
import plotly.express as px
labeling_df = pd.DataFrame({
"Mean Firing Rate": mean_firing,
"Trough to peak": t_t_peak,
"Region": dat['brain_area'],
"Area":dat['brain_area']
})
labeling_df.replace(
{
"Area": {"CA1":"Hippocampus","DG":"Hippocampus","SUB":"Hippocampus",
"VISp": "Visual Ctx", "VISam":"Visual Ctx","MD":"Thalamus","LGd":"Thalamus", "LH":"Thalamus",
"PL":"Other Ctx","MOs":"Other Ctx","ACA":"Other Ctx"
}
}, inplace = True
)
# Labeling according to conditions, other is the default condition
labeling_df['Cell Type'] = "Other"
labeling_df.loc[(labeling_df['Mean Firing Rate']<5)&(labeling_df['Trough to peak']>0.4),'Cell Type'] = "Excitatory"
labeling_df.loc[(labeling_df['Mean Firing Rate']>5)&(labeling_df['Trough to peak']<0.4), 'Cell Type'] = "Inhibitory"
px.scatter(x="Mean Firing Rate", y ="Trough to peak", color = "Cell Type", data_frame = labeling_df) | _____no_output_____ | MIT | NMA_project.ipynb | AvocadoChutneys/ProjectNMA |
Raster plot

* We are now able to separate the **trials** based on *correct and incorrect* responses and separate the **neurons** based on *putative cell type*:
  * Inhibitory cells
  * Other cells
  * Excitatory cells | #@title raster visualizer
from ipywidgets import interact
import ipywidgets as widgets
vis_right = dat['contrast_right'] # 0 - low - high
vis_left = dat['contrast_left'] # 0 - low - high
is_correct = np.sign(dat['response'])==np.sign(vis_left-vis_right)
def raster_visualizer(area,trial):
spikes= dat2['ss']
plt.figure(figsize=(9,5))
plt.eventplot(spikes[(labeling_df['Area']==area) & (labeling_df['Cell Type']=='Excitatory')][:,trial]);
plt.eventplot(spikes[(labeling_df['Area']==area) & (labeling_df['Cell Type']=='Other')][:,trial],color='k');
plt.eventplot(spikes[(labeling_df['Area']==area) & (labeling_df['Cell Type']=='Inhibitory')][:,trial],colors = 'r');
plt.yticks([]);
plt.vlines(0.5,0,len(spikes[(labeling_df['Area']==area)])-50,'gray','--',alpha=0.5)
plt.ylabel('Neurons');
plt.xlabel('Time ($s$)');
plt.title(f'Trial was correct?:{is_correct[trial]}')
interact(raster_visualizer, area=['Hippocampus','Visual Ctx','Thalamus','Other Ctx'], trial=(0,339));
#@title Mean firing rate based on response
# response = dat['response'] # right - nogo - left (-1, 0, 1)
def mean_firing(area):
Selection = (labeling_df['Area']==area) #& (labeling_df['Cell Type']=='Excitatory')
spikes = dat['spks'][Selection].mean(axis = 0) #selecting spikes
mean_fr_e = spikes[is_correct==True].mean(axis=(0))*1/binsize
mean_fr_i = spikes[is_correct==False].mean(axis=(0))*1/binsize
time = binsize * np.arange(dat['spks'].shape[-1])
plt.plot(time, mean_fr_e,label='correct')
plt.plot(time, mean_fr_i,label='incorrect')
plt.axvline(x=0.5,ls = '--', alpha = 0.5, c = 'r', label='Stim')
plt.axvline(x=np.mean(dat['response_time']),ls = '--', alpha = 0.5, c = 'k', label='Response')
plt.ylabel('Mean Firing Rate ($Hz$)')
plt.xlabel('Time ($ms$)')
plt.legend()
interact(mean_firing, area=['Hippocampus','Visual Ctx','Thalamus','Other Ctx']); | _____no_output_____ | MIT | NMA_project.ipynb | AvocadoChutneys/ProjectNMA |
Modeling

* first cell creates the full data frame:
  * each column is a neuron (*except the last one which is the target variable*)
  * each row is a trial
  * each cell is mean firing rate

In this example we are taking the hippocampal region | #@title DataFrame construction
# selects only correct after incorrect trials and correct after correct trials
correct_after_i = np.where(np.diff(is_correct.astype('float32'))==1)[0]
idx_c_c = []
for i in range(len(is_correct)-1):
if is_correct[i] and is_correct[i+1]: # trial i and the following trial are both correct
idx_c_c.append(i)
correct_after_c = np.array(idx_c_c)
idx = np.append(correct_after_i,correct_after_c)
c_based_on_pre = np.append(np.array([0]*len(correct_after_i)),np.array([1]*len(correct_after_c)))
def get_full_X_y(area,y):
bin_spk_hip = dat['spks'][(labeling_df['Area']== area)]
bin_spk_hip = np.moveaxis(bin_spk_hip[:,:,:50],1,0)
x= bin_spk_hip.mean(axis=2)
return x,y
def get_prestim_X_y(area,y):
bin_spk_hip = dat['spks'][(labeling_df['Area']== area)]
bin_spk_hip = np.moveaxis(bin_spk_hip,1,0)
x= bin_spk_hip[idx,:,:50].mean(axis=2)
return x,y
print('Available options: Hippocampus, Visual Ctx, Thalamus, Other Ctx')
area = input('Select the area to visualize:')
x,y = get_prestim_X_y(area,c_based_on_pre)
def construct_df(x,y,named=False):
if named == True:
X = pd.DataFrame(x,columns=[f"N{i}" for i in range(x.shape[1])])
else:
X = pd.DataFrame(x)
full_df = pd.concat([X,pd.Series(y,name='target')],axis=1)
return full_df
df = construct_df(x,y)
import seaborn as sns
sns.heatmap(x*1/binsize,cbar_kws={'label':'Mean Firing rate ($Hz$)'});
plt.ylabel('Trials');
plt.xlabel('Neuron (id)');
#@title Baseline model: Logistic Regression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
print('Available options: Hippocampus, Visual Ctx, Thalamus, Other Ctx')
area = input('Select the area to fit:')
X_full,y_full = get_full_X_y(area,is_correct)
X_pres,y_pres = get_prestim_X_y(area,c_based_on_pre) #Getting the data
def LogReg(X,y):
model = LogisticRegression(C=1, solver="saga", max_iter=5000)
Log_model=model.fit(X,y)
accuracies = cross_val_score(model,X,y,cv=10)
auc = cross_val_score(model,X,y,cv=10,scoring='roc_auc')
return Log_model, accuracies, auc
model_full, accuracies_full, auc_full = LogReg(X_full,y_full)
model_pre, accuracies_pre, auc_pre = LogReg(X_pres,y_pres)
#@title Comparing Accuracy
output_df = pd.DataFrame({
'Model': np.concatenate((np.array(["Full"]*10),np.array(["Pre_Stim"]*10))),
"Accuracy": np.concatenate((accuracies_full,accuracies_pre))
})
sns.boxplot(x='Accuracy',y='Model',data=output_df);
plt.title(f'Accuracy at: {area}');
#@title Compare AUC
output_df = pd.DataFrame({
'Model': np.concatenate((np.array(["Full"]*10),np.array(["Paired"]*10))),
"AUC": np.concatenate((auc_full,auc_pre))
})
sns.boxplot(x='AUC',y='Model',data=output_df);
plt.title(f'AUC at: {area}'); | _____no_output_____ | MIT | NMA_project.ipynb | AvocadoChutneys/ProjectNMA |
These results make us think that the prestimulus activity **could** carry information related to the previous trial. We realized that we had a class-imbalance problem, so we proceeded to balance the classes: | !pip install imbalanced-learn --quiet
#@title Balancing function
def balancer(X,y,undersample = 0.5):
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
print('######################################################')
print(f"{np.sum(y)} original number of samples in the 1 class") # 1 class
print(f"{len(y)-np.sum(y)} original number of samples in the 0 class")
print('######################################################')
#model = LogisticRegression(C=1, solver="saga", max_iter=5000)
over = SMOTE("minority",random_state = 43)
under = RandomUnderSampler(sampling_strategy=undersample,random_state = 43)
steps = [('under', under), ('over', over)]
#steps = [('under', under), ('over', over)]
pipeline = Pipeline(steps=steps);
# transform the dataset
X, y = pipeline.fit_resample(X,y);
print('ooooooooooooooooooooooooooooooooooooooooooooooooooo')
print(f"{np.sum(y)} resampled data in the 1 class") # 1 class
print(f"{len(y)-np.sum(y)} resampled data in the 0 class")
print('ooooooooooooooooooooooooooooooooooooooooooooooooooo')
return X,y
b_X_pres, b_y_pres = balancer(X_pres,y_pres)
b_X_full, b_y_full = balancer(X_full,y_full,0.9)
#@title Compare accuracy with balanced classes
model_full, accuracies_full, auc_full = LogReg(b_X_full, b_y_full )
model_pre, accuracies_pre, auc_pre = LogReg(b_X_pres, b_y_pres)
output_df = pd.DataFrame({
'Model': np.concatenate((np.array(["Full"]*10),np.array(["Paired"]*10))),
"Accuracy": np.concatenate((accuracies_full,accuracies_pre))
})
sns.boxplot(x='Accuracy',y='Model',data=output_df);
plt.title(f'Accuracy at: {area}');
#@title Compare AUC with balanced classes
output_df = pd.DataFrame({
'Model': np.concatenate((np.array(["Full"]*10),np.array(["Paired"]*10))),
"AUC": np.concatenate((auc_full,auc_pre))
})
sns.boxplot(x='AUC',y='Model',data=output_df);
plt.title(f'AUC at: {area}'); | _____no_output_____ | MIT | NMA_project.ipynb | AvocadoChutneys/ProjectNMA |
After balancing the classes we see a decrease in accuracy but a fair increase in AUC, which might mean that with the unbalanced dataset our classifier was assigning the most frequent class to every sample.

Selection of a better model

Now that we know that pre-stimulus activity might be able to classify a correct trial based on the preceding outcome, we are in a position to use a better model and see if it is possible. In order to do that, we construct the dataframe with the balanced data and set up the pycaret environment, selecting the indicator variable of outcome pairs ([incorrect → correct, correct → correct]) as the `target`.

* The `numeric_features` argument specifies the datatype of those columns (pycaret was inferring them as categorical).

This procedure also splits the data into train and test sets and sets up a 10-fold CV process. | from pycaret.classification import *
resampled_df = construct_df(b_X_pres, b_y_pres,named=True)
exp_clf101 = setup(data = resampled_df, target = 'target', numeric_features=['N1','N22','N32','N143','N148','N153','N183','N184','N189'], session_id=123) |
Setup Succesfully Completed!
| MIT | NMA_project.ipynb | AvocadoChutneys/ProjectNMA |
Here we are comparing different CV classification metrics from 14 different models; **Quadratic Discriminant Analysis** had the best performance. | compare_models() | _____no_output_____ | MIT | NMA_project.ipynb | AvocadoChutneys/ProjectNMA |
Quadratic Discriminant Analysis

We have two classes $k \in \{0,1\}$: correct preceded by an incorrect trial (0) and correct preceded by a correct trial (1). Every class has a prior probability $P(k) = \frac{N_k}{N} = 0.5$, since the dataframe is balanced. Basically, we are trying to find the posterior probability of being in a class given the observations:

$\rm{Pr}(K = k | X = x) = \frac{f_k(x) P(k)}{\sum_{l=1}^K f_l(x) P(l)}$

The classification problem then reduces to finding the class that maximizes that posterior probability:

$C(x) = \arg \max_k \rm{Pr}(K = k | X = x) = \arg \max_k f_k(x) P(k)$

and it is assumed that the data has a Gaussian likelihood:

$f_k(x) = {|2 \pi \Sigma_k|}^{-1/2} \exp\left(-\frac{1}{2}(x - \mu_k)^T\Sigma_k^{-1}(x - \mu_k)\right)$

Finally, taking the log and dropping terms that do not depend on $k$:

$C(x) = \arg \max_k \left( - \frac{1}{2} \log |\Sigma_k| - \frac{1}{2}(x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k) + \log P(k) \right)$

Here we instantiate the model and make a 10-fold CV run, obtaining a final accuracy of **0.84** and an AUC of **0.84**. | qda = create_model('qda') | _____no_output_____ | MIT | NMA_project.ipynb | AvocadoChutneys/ProjectNMA |
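As a rough cross-check on the pycaret result (an illustrative aside, not part of the original workflow), the same model family can be cross-validated directly with scikit-learn on the balanced pre-stimulus data `b_X_pres`, `b_y_pres` built above:

```python
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# 10-fold CV accuracy and AUC for QDA on the balanced pre-stimulus data
qda_sk = QuadraticDiscriminantAnalysis()
acc = cross_val_score(qda_sk, b_X_pres, b_y_pres, cv=10)
auc = cross_val_score(qda_sk, b_X_pres, b_y_pres, cv=10, scoring='roc_auc')
print("accuracy: %.3f +/- %.3f" % (acc.mean(), acc.std()))
print("AUC:      %.3f +/- %.3f" % (auc.mean(), auc.std()))
```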
Here we can see the ROC curve and the Precision-Recall curve of the classifier, showing that our classifier is able to discriminate between both classes very well. Results in the test set: | plot_model(qda, plot = 'auc')
plot_model(qda, plot = 'pr') | _____no_output_____ | MIT | NMA_project.ipynb | AvocadoChutneys/ProjectNMA |
The confusion matrix shows us that, from the pre-stimulus activity alone, it is easier to correctly classify a correct trial preceded by a correct trial, with 3 false positives in the test set: | plot_model(qda, plot = 'confusion_matrix') | _____no_output_____ | MIT | NMA_project.ipynb | AvocadoChutneys/ProjectNMA |
Finally we test out the model and retrieve the metrics with unseen data (our test set): | predict_model(qda); | _____no_output_____ | MIT | NMA_project.ipynb | AvocadoChutneys/ProjectNMA |
Finding the Max Sharpe Ratio PortfolioWe've already seen that given a set of expected returns and a covariance matrix, we can plot the efficient frontier. In this section, we'll extend the code to locate the point on the efficient frontier that we are most interested in, which is the tangency portfolio or the Max Sharpe Ratio portfolio.Let's start by the usual imports, and load in the data. | %load_ext autoreload
%autoreload 2
%matplotlib inline
import edhec_risk_kit_110 as erk
ind = erk.get_ind_returns()
er = erk.annualize_rets(ind["1996":"2000"], 12)
cov = ind["1996":"2000"].cov() | _____no_output_____ | CNRI-Python-GPL-Compatible | Investment Management/Course1/lab_110.ipynb | djoye21school/python |
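Before locating that point numerically, it helps to state the optimization problem that the `msr()` function shown below solves (this is just the standard tangency-portfolio formulation, restated to match the code):

$$ \max_{w} \; \frac{w^{T}\mu - r_f}{\sqrt{w^{T}\Sigma w}} \quad \text{subject to} \quad \sum_i w_i = 1, \quad 0 \le w_i \le 1 $$

where $\mu$ is the vector of expected returns, $\Sigma$ the covariance matrix, and $r_f$ the risk-free rate. In practice the code minimizes the *negative* Sharpe ratio with the SLSQP method of `minimize`.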
We already know how to identify points on the curve if we are given a target rate of return. Instead of minimizing the vol based on a target return, we want to find that one point on the curve that maximizes the Sharpe Ratio, given the risk free rate.

```python
def msr(riskfree_rate, er, cov):
    """
    Returns the weights of the portfolio that gives you the maximum sharpe ratio
    given the riskfree rate and expected returns and a covariance matrix
    """
    n = er.shape[0]
    init_guess = np.repeat(1/n, n)
    bounds = ((0.0, 1.0),) * n # an N-tuple of 2-tuples!
    # construct the constraints
    weights_sum_to_1 = {'type': 'eq',
                        'fun': lambda weights: np.sum(weights) - 1
    }
    def neg_sharpe(weights, riskfree_rate, er, cov):
        """
        Returns the negative of the sharpe ratio of the given portfolio
        """
        r = portfolio_return(weights, er)
        vol = portfolio_vol(weights, cov)
        return -(r - riskfree_rate)/vol
    weights = minimize(neg_sharpe, init_guess,
                       args=(riskfree_rate, er, cov), method='SLSQP',
                       options={'disp': False},
                       constraints=(weights_sum_to_1,),
                       bounds=bounds)
    return weights.x
```

Let's guess where the point might be: | ax = erk.plot_ef(20, er, cov)
ax.set_xlim(left = 0)
# plot EF
ax = erk.plot_ef(20, er, cov)
ax.set_xlim(left = 0)
# get MSR
rf = 0.1
w_msr = erk.msr(rf, er, cov)
r_msr = erk.portfolio_return(w_msr, er)
vol_msr = erk.portfolio_vol(w_msr, cov)
# add CML
cml_x = [0, vol_msr]
cml_y = [rf, r_msr]
ax.plot(cml_x, cml_y, color='green', marker='o', linestyle='dashed', linewidth=2, markersize=12)
r_msr, vol_msr | _____no_output_____ | CNRI-Python-GPL-Compatible | Investment Management/Course1/lab_110.ipynb | djoye21school/python |
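For reference, the dashed green line drawn above is the Capital Market Line (CML): the straight line through the risk-free asset $(0, r_f)$ and the MSR portfolio $(\sigma_{msr}, R_{msr})$,

$$ R(\sigma) = r_f + \frac{R_{msr} - r_f}{\sigma_{msr}}\,\sigma , $$

so its slope is exactly the maximum Sharpe ratio we just located.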
Let's put it all together by adding the CML to the `plot_ef` code. Add the following code:

```python
    if show_cml:
        ax.set_xlim(left = 0)
        # get MSR
        w_msr = msr(riskfree_rate, er, cov)
        r_msr = portfolio_return(w_msr, er)
        vol_msr = portfolio_vol(w_msr, cov)
        # add CML
        cml_x = [0, vol_msr]
        cml_y = [riskfree_rate, r_msr]
        ax.plot(cml_x, cml_y, color='green', marker='o', linestyle='dashed', linewidth=2, markersize=12)
```
 | erk.plot_ef(20, er, cov, style='-', show_cml=True, riskfree_rate=0.1) | _____no_output_____ | CNRI-Python-GPL-Compatible | Investment Management/Course1/lab_110.ipynb | djoye21school/python |
Plotting the results of Man Of the Match Award in IPL 2008 - 2018 | player_names = list(player_of_match.keys())
number_of_times = list(player_of_match.values())
# Plotting the Graph
plt.bar(range(len(player_of_match)), number_of_times)
plt.title('Man Of the Match Award')
plt.show() | _____no_output_____ | MIT | .ipynb_checkpoints/IPL 2008 - 2018 Analysis-checkpoint.ipynb | srimani-programmer/IPL-Analysis |
Number Of Wins Of Each Team | teamWinCounts = dict()
for team in matches_dataset['winner']:
if team == None:
continue
else:
teamWinCounts[team] = teamWinCounts.get(team,0) + 1
for teamName, Count in teamWinCounts.items():
print(teamName,':',Count) | Sunrisers Hyderabad : 52
Rising Pune Supergiant : 10
Kolkata Knight Riders : 86
Kings XI Punjab : 76
Royal Challengers Bangalore : 79
Mumbai Indians : 98
Delhi Daredevils : 67
Gujarat Lions : 13
Chennai Super Kings : 90
Rajasthan Royals : 70
Deccan Chargers : 29
Pune Warriors : 12
Kochi Tuskers Kerala : 6
nan : 3
Rising Pune Supergiants : 5
| MIT | .ipynb_checkpoints/IPL 2008 - 2018 Analysis-checkpoint.ipynb | srimani-programmer/IPL-Analysis |
Plotting the Results Of Team Winning | numberOfWins = teamWinCounts.values()
teamName = teamWinCounts.keys()
plt.bar(range(len(teamWinCounts)), numberOfWins)
plt.xticks(range(len(teamWinCounts)), list(teamWinCounts.keys()), rotation='vertical')
plt.xlabel('Team Names')
plt.ylabel('Number Of Win Matches')
plt.title('Analysis Of Number Of Matches Won by Each Team From 2008 - 2018', color="Orange")
plt.show() | _____no_output_____ | MIT | .ipynb_checkpoints/IPL 2008 - 2018 Analysis-checkpoint.ipynb | srimani-programmer/IPL-Analysis |
Total Matches Played by Each team From 2008 - 2018 | totalMatchesCount = dict()
# For Team1
for team in matches_dataset['team1']:
totalMatchesCount[team] = totalMatchesCount.get(team, 0) + 1
# For Team2
for team in matches_dataset['team2']:
totalMatchesCount[team] = totalMatchesCount.get(team, 0) + 1
# Printing the total matches played by each team
for teamName, count in totalMatchesCount.items():
print('{} : {}'.format(teamName,count)) | Sunrisers Hyderabad : 93
Mumbai Indians : 171
Gujarat Lions : 30
Rising Pune Supergiant : 16
Royal Challengers Bangalore : 166
Kolkata Knight Riders : 164
Delhi Daredevils : 161
Kings XI Punjab : 162
Chennai Super Kings : 147
Rajasthan Royals : 133
Deccan Chargers : 75
Kochi Tuskers Kerala : 14
Pune Warriors : 46
Rising Pune Supergiants : 14
| MIT | .ipynb_checkpoints/IPL 2008 - 2018 Analysis-checkpoint.ipynb | srimani-programmer/IPL-Analysis |
Plotting the Total Matches Played by Each Team | teamNames = totalMatchesCount.keys()
teamCount = totalMatchesCount.values()
plt.bar(range(len(totalMatchesCount)), teamCount)
plt.xticks(range(len(totalMatchesCount)), list(teamNames), rotation='vertical')
plt.xlabel('Team Names')
plt.ylabel('Number Of Played Matches')
plt.title('Total Number Of Matches Played By Each Team From 2008 - 2018')
plt.show() | _____no_output_____ | MIT | .ipynb_checkpoints/IPL 2008 - 2018 Analysis-checkpoint.ipynb | srimani-programmer/IPL-Analysis |
CO460 - Deep Learning - Lab exercise 3

Introduction

In this exercise, you will develop and experiment with convolutional AEs (CAE) and VAEs (CVAE). You will be asked to:
- experiment with the architectures and compare the convolutional models to the fully connected ones.
- investigate and implement sampling and interpolation in the latent space. | import os
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.utils import save_image
import torch.nn.functional as F
from utils import *
import matplotlib.pyplot as plt
import numpy as np
from utils import denorm_for_tanh, denorm_for_sigmoid | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Device selection | GPU = True
device_idx = 0
if GPU:
device = torch.device("cuda:"+str(device_idx) if torch.cuda.is_available() else "cpu")
else:
device = torch.device("cpu")
print(device) | cuda:0
| MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Reproducibility | # We set a random seed to ensure that your results are reproducible.
if torch.cuda.is_available():
torch.backends.cudnn.deterministic = True
torch.manual_seed(0) | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Part 1 - CAE

Normalization: $ x_{norm} = \frac{x-\mu}{\sigma} $

_Thus_: $ \min{(x_{norm})} = \frac{\min{(x)}-\mu}{\sigma} = \frac{0-0.5}{0.5} = -1 $

_Similarly_: $ \max{(x_{norm})} = ... = 1 $

* Input $\in [-1,1]$
* Output should span the same interval $\rightarrow$ the activation function of the output layer should be chosen carefully (which one fits here?) | transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
denorm = denorm_for_tanh
train_dat = datasets.MNIST(
"data/", train=True, download=True, transform=transform
)
test_dat = datasets.MNIST("data/", train=False, transform=transform) | Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
Processing...
Done!
| MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Hyper-parameter selection | if not os.path.exists('./CAE'):
os.mkdir('./CAE')
num_epochs = 20
batch_size = 128
learning_rate = 1e-3 | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Define the dataloaders | train_loader = DataLoader(train_dat, batch_size, shuffle=True)
test_loader = DataLoader(test_dat, batch_size, shuffle=False)
it = iter(test_loader)
sample_inputs, _ = next(it)
fixed_input = sample_inputs[:32, :, :, :]
in_dim = fixed_input.shape[-1]*fixed_input.shape[-2]
save_image(fixed_input, './CAE/image_original.png') | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Define the model - CAE

Complete the `encoder` and `decoder` methods in the CAE pipeline. To find an effective architecture, you can experiment with the following:
- the number of convolutional layers
- the kernels' sizes
- the stride values
- the size of the latent space layer | class CAE(nn.Module):
def __init__(self, latent_dim):
super(CAE, self).__init__()
"""
TODO: Define here the layers (convolutions, relu etc.) that will be
used in the encoder and decoder pipelines.
"""
def encode(self, x):
"""
TODO: Construct the encoder pipeline here. The encoder's
output will be the laten space representation of x.
"""
return x
def decode(self, z):
"""
TODO: Construct the decoder pipeline here. The decoder should
generate an output tensor with equal dimenssions to the
encoder's input tensor.
"""
return z
def forward(self, x):
x = self.encoder(x)
x = self.decoder(x)
return x
# Instantiate the model
latent_dim =
cv_AE = CAE(latent_dim=latent_dim) | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
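The encoder and decoder are intentionally left as TODOs. As a purely illustrative reference (one of many architectures that would satisfy the exercise, not the official solution), a small strided-convolution pipeline with a `tanh` output (so the reconstruction spans the same $[-1,1]$ range as the normalized inputs) could look like this:

```python
# Illustrative sketch only: a possible CAE architecture for 1x28x28 MNIST inputs.
class ExampleCAE(nn.Module):
    def __init__(self, latent_dim=32):
        super(ExampleCAE, self).__init__()
        # encoder layers
        self.enc_conv1 = nn.Conv2d(1, 16, kernel_size=4, stride=2, padding=1)   # 28x28 -> 14x14
        self.enc_conv2 = nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1)  # 14x14 -> 7x7
        self.enc_fc = nn.Linear(32 * 7 * 7, latent_dim)
        # decoder layers
        self.dec_fc = nn.Linear(latent_dim, 32 * 7 * 7)
        self.dec_conv1 = nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1)  # 7x7 -> 14x14
        self.dec_conv2 = nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1)   # 14x14 -> 28x28

    def encode(self, x):
        x = F.relu(self.enc_conv1(x))
        x = F.relu(self.enc_conv2(x))
        x = x.view(x.size(0), -1)              # flatten to (batch, 32*7*7)
        return self.enc_fc(x)                  # latent vector of size latent_dim

    def decode(self, z):
        z = F.relu(self.dec_fc(z))
        z = z.view(z.size(0), 32, 7, 7)        # unflatten back to feature maps
        z = F.relu(self.dec_conv1(z))
        return torch.tanh(self.dec_conv2(z))   # output in [-1, 1], matching the inputs

    def forward(self, x):
        return self.decode(self.encode(x))
```

Varying the number of layers, kernel sizes, strides, and `latent_dim` along these lines is exactly the experimentation the exercise asks for.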
Define Loss function | criterion = nn.L1Loss(reduction='sum') # can we use any other loss here?
def loss_function_CAE(recon_x, x):
recon_loss = criterion(recon_x, x)
return recon_loss | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Initialize Model and print number of parameters | model = cv_AE.to(device)
params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("Total number of parameters is: {}".format(params)) # what would the number actually be?
print(model) | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Choose and initialize optimizer | optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Train | model.train()
for epoch in range(num_epochs):
train_loss = 0
for batch_idx, data in enumerate(train_loader):
img, _ = data
img = img.to(device)
optimizer.zero_grad()
# forward
recon_batch = model(img)
loss = loss_function_CAE(recon_batch, img)
# backward
loss.backward()
train_loss += loss.item()
optimizer.step()
# print out losses and save reconstructions for every epoch
print('epoch [{}/{}], loss:{:.4f}'.format(epoch + 1, num_epochs, train_loss / len(train_loader.dataset)))
recon = denorm(model(fixed_input.to(device)))
save_image(recon, './CAE/reconstructed_epoch_{}.png'.format(epoch))
# save the model
torch.save(model.state_dict(), './CAE/model.pth') | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Test | # load the model
model.load_state_dict(torch.load("./CAE/model.pth"))
model.eval()
test_loss = 0
with torch.no_grad():
for i, (img, _) in enumerate(test_loader):
img = img.to(device)
recon_batch = model(img)
test_loss += loss_function_CAE(recon_batch, img)
# recon_batch already holds the reconstruction of the last test batch; save it below
img = denorm(img.cpu())
# save the original last batch
save_image(img, './CAE/test_original.png')
save_image(denorm(recon_batch.cpu()), './CAE/reconstructed_test.png')
# loss calculated over the whole test set
test_loss /= len(test_loader.dataset)
print('Test set loss: {:.4f}'.format(test_loss)) | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Interpolations | # Define inpute tensors
x1 =
x2 =
# Create the latent representations
z1 = model.encode(x1)
z2 = model.encode(x2)
"""
TODO: Find a way to create interpolated results from the CAE.
"""
Z =
X_hat = model.decode(Z) | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
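One common way to fill in the interpolation TODO (an illustrative sketch, assuming `x1` and `x2` were set to two input images of shape `(1, 1, 28, 28)`) is a linear walk between the two latent codes:

```python
# Linear interpolation in the CAE's latent space (illustrative sketch).
with torch.no_grad():
    z1 = model.encode(x1.to(device))
    z2 = model.encode(x2.to(device))
    alphas = torch.linspace(0, 1, steps=10).to(device)
    Z = torch.cat([(1 - a) * z1 + a * z2 for a in alphas], dim=0)
    X_hat = model.decode(Z)
save_image(denorm(X_hat.cpu()), './CAE/interpolation.png')
```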
Part 2 - CVAE Normalization | transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
denorm = denorm_for_tanh
train_dat = datasets.MNIST(
"data/", train=True, download=True, transform=transform
)
test_dat = datasets.MNIST("data/", train=False, transform=transform) | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Hyper-parameter selection | if not os.path.exists('./CVAE'):
os.mkdir('./CVAE')
num_epochs = 20
batch_size = 128
learning_rate = 1e-3 | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Define the dataloaders | train_loader = DataLoader(train_dat, batch_size, shuffle=True)
test_loader = DataLoader(test_dat, batch_size, shuffle=False)
it = iter(test_loader)
sample_inputs, _ = next(it)
fixed_input = sample_inputs[:32, :, :, :]
in_dim = fixed_input.shape[-1]*fixed_input.shape[-2]
save_image(fixed_input, './CVAE/image_original.png') | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Define the model - CVAE

Complete the `encoder` and `decoder` methods in the CVAE pipeline. To find an effective architecture, you can experiment with the following:
- the number of convolutional layers
- the kernels' sizes
- the stride values
- the size of the latent space layer | class CVAE(nn.Module):
def __init__(self, latent_dim):
super(CVAE, self).__init__()
"""
TODO: Define here the layers (convolutions, relu etc.) that will be
used in the encoder and decoder pipelines.
"""
def encode(self, x):
"""
TODO: Construct the encoder pipeline here.
"""
return mu, logvar
def reparametrize(self, mu, logvar):
"""
TODO: Implement reparameterization here.
"""
return z
def decode(self, z):
"""
TODO: Construct the decoder pipeline here.
"""
return z
def forward(self, x):
mu, logvar = self.encode(x)
z = self.reparametrize(mu, logvar)
x_hat = self.decode(z)
return x_hat, mu, logvar
# Instantiate the model
latent_dim =
cv_VAE = CVAE(latent_dim =latent_dim) | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
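For reference, the reparameterization step is usually written as $z = \mu + \sigma \odot \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$, which keeps the sampling differentiable with respect to $\mu$ and $\log\sigma^2$. A minimal, illustrative way the `reparametrize` TODO could be filled in:

```python
# Illustrative sketch of the reparameterization trick.
def reparametrize_example(mu, logvar):
    std = torch.exp(0.5 * logvar)   # logvar = log(sigma^2)  ->  sigma
    eps = torch.randn_like(std)     # epsilon ~ N(0, I), same shape as std
    return mu + eps * std           # differentiable sample z
```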
Define Loss function | # Reconstruction + KL divergence losses summed over all elements and batch
def loss_function_VAE(recon_x, x, mu, logvar):
BCE = F.binary_cross_entropy(recon_x, x, size_average=False)
KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
return BCE + KLD | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Initialize Model and print number of parameters | model = cv_AE.to(device)
params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("Total number of parameters is: {}".format(params)) # what would the number actually be?
print(model) | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Choose and initialize optimizer | optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Train | model.train()
for epoch in range(num_epochs):
train_loss = 0
for batch_idx, data in enumerate(train_loader):
img, _ = data
img = img.to(device)
optimizer.zero_grad()
# forward
recon_batch, mu, logvar = model(img) # CVAE forward returns (x_hat, mu, logvar)
loss = loss_function_VAE(recon_batch, img, mu, logvar)
# backward
loss.backward()
train_loss += loss.item()
optimizer.step()
# print out losses and save reconstructions for every epoch
print('epoch [{}/{}], loss:{:.4f}'.format(epoch + 1, num_epochs, train_loss / len(train_loader.dataset)))
recon = denorm(model(fixed_input.to(device))[0]) # take the reconstruction from the (x_hat, mu, logvar) tuple
save_image(recon, './CVAE/reconstructed_epoch_{}.png'.format(epoch))
# save the model
torch.save(model.state_dict(), './CVAE/model.pth') | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Test | # load the model
model.load_state_dict(torch.load("./CVAE/model.pth"))
model.eval()
test_loss = 0
with torch.no_grad():
for i, (img, _) in enumerate(test_loader):
img = img.to(device)
recon_batch, mu, logvar = model(img) # CVAE forward returns (x_hat, mu, logvar)
test_loss += loss_function_VAE(recon_batch, img, mu, logvar)
# recon_batch now holds the reconstruction of the last test batch; save it below
img = denorm(img.cpu())
# save the original last batch
save_image(img, './CVAE/test_original.png')
save_image(denorm(recon_batch.cpu()), './CVAE/reconstructed_test.png')
# loss calculated over the whole test set
test_loss /= len(test_loader.dataset)
print('Test set loss: {:.4f}'.format(test_loss)) | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Sample

Sample the latent space and use the `decoder` to generate results. | model.load_state_dict(torch.load("./CVAE/model.pth"))
model.eval()
with torch.no_grad():
"""
TODO: Investigate how to sample the latent space of the CVAE.
"""
z =
sample = model.decode(z)
save_image(denorm(sample).cpu(), './CVAE/samples_' + '.png') | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
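A minimal, illustrative way to fill the sampling TODO is to draw latent codes directly from the standard-normal prior that the KL term regularizes the encoder toward (assuming `latent_dim` matches the trained model):

```python
# Illustrative: decode 64 latent codes sampled from the N(0, I) prior.
with torch.no_grad():
    z = torch.randn(64, latent_dim).to(device)
    sample = model.decode(z)
    save_image(denorm(sample).cpu(), './CVAE/samples_prior.png')
```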
Interpolations | # Define inpute tensors
x1 =
x2 =
# Create the latent representations
z1 = model.encode(x1)
z2 = model.encode(x2)
"""
TODO: Find a way to create interpolated results from the CVAE.
"""
Z =
X_hat = model.decode(Z) | _____no_output_____ | MIT | AE_VAE_CAE_CVAE/LabExercise3.ipynb | quantumiracle/Course_Code |
Fit models code | def AIC(log_likelihood, k):
""" AIC given log_likelihood and # parameters (k)
"""
aic = 2 * k - 2 * log_likelihood
return aic
def BIC(log_likelihood, n, k):
""" BIC given log_likelihood, number of observations (n) and # parameters (k)
"""
bic = np.log(n) * k - 2 * log_likelihood
return bic
def FOMM(seqs, prop_test=0.5):
""" create a FOMM in pomegranite
"""
if prop_test == 0:
seqs_train = seqs_test = seqs
else:
# split into train and test for cross validation
training_mask = np.random.choice(
np.arange(len(seqs)), size=int(len(seqs) * prop_test), replace=False
)
testing_mask = np.array(
[i for i in np.arange(len(seqs)) if i not in training_mask]
)
seqs_train = np.array(seqs)[training_mask]
seqs_test = np.array(seqs)[testing_mask]
# make sure test set doesn't contain any data that train doesnt
assert np.all(
[
i in np.unique(np.concatenate(seqs_train))
for i in np.unique(np.concatenate(seqs_test))
]
)
# lengths of sequences
seq_lens = [len(i) for i in seqs_train]
# get states
unique_states = np.unique(np.concatenate(seqs_train))
# get start probabilities
seq_starts = np.array([i[0] for i in seqs_train])
start_probs = [np.sum(seq_starts == i) / len(seqs_train) for i in unique_states]
end_states = [seq[-1] for seq in seqs]
end_probs = [
np.sum(end_states == i) / (np.sum(np.concatenate(seqs) == i) + 1)
for i in np.arange(len(unique_states))
]
# transition probs
trans_mat = np.zeros((len(unique_states), len(unique_states)))
for seq in seqs_train:
for i, j in zip(seq[:-1], seq[1:]):
trans_mat[i, j] += 1
# smooth to nonzero probabilities
trans_mat = (trans_mat.T / trans_mat.sum(axis=1)).T # np.sum(trans_mat, axis=1)
# smooth emissions
emission_prob = np.identity(len(unique_states)) + 1e-5
emission_prob = (emission_prob.T / emission_prob.sum(axis=1)).T
# number of datapoints
test_seq_lens = [len(i) for i in seqs_test]
n_data = np.sum(test_seq_lens)
# initialize pomegranate model
transmat = trans_mat
start_probs = start_probs
dists = emission_prob
states = [
DiscreteDistribution({vis: d[i] for i, vis in enumerate(unique_states)})
for d in dists
]
pom_model = HiddenMarkovModel.from_matrix(
transition_probabilities=transmat,
distributions=states,
starts=start_probs,
ends=end_probs, # discluding ends and merge makes models equal log prob
merge="None",
)
pom_model.bake()
pom_log_probability = np.sum([pom_model.log_probability(seq) for seq in seqs_test])
# number of params in model
num_params = (
pom_model.edge_count() + pom_model.node_count() + pom_model.state_count() # no hidden states in FOMM
)
# AIC and BIC
aic = AIC(pom_log_probability, num_params)
bic = BIC(pom_log_probability, n_data, num_params)
return (
pom_model,
seqs_train,
seqs_test,
pom_log_probability,
num_params,
n_data,
aic,
bic,
)
def fit_fixed_latent(seqs, latent_seqs, verbose=False):
unique_latent_labels = np.unique(np.concatenate(latent_seqs))
n_components = len(unique_latent_labels)
# convert latent sequences to correct format
label_seqs_str = [
["None-start"] + ["s" + str(i) for i in seq] + ["None-end"]
for seq in latent_seqs
]
pom_model = HiddenMarkovModel.from_samples(
distribution=DiscreteDistribution,
n_components=len(unique_latent_labels),
X=seqs,
labels=label_seqs_str,
end_state=True,
algorithm="labeled",
verbose=verbose,
)
log_prob = [pom_model.log_probability(seq) for seq in seqs]
sum_log_prob = np.sum(log_prob)
num_params = (
pom_model.state_count() + pom_model.edge_count() + pom_model.node_count()
)
n_data = np.sum([len(i) for i in seqs])
aic = AIC(sum_log_prob, num_params)
bic = BIC(sum_log_prob, n_data, num_params)
return pom_model, log_prob, sum_log_prob, n_components, num_params, n_data, aic, bic
DATASET_ID = 'koumura_bengalese_finch'
embeddings_dfs = list(DATA_DIR.glob('bf_label_dfs/'+DATASET_ID+'/*.pickle'))
DATASET_ID = 'bengalese_finch_sober'
embeddings_dfs = embeddings_dfs + list(DATA_DIR.glob('bf_label_dfs/'+DATASET_ID+'/*.pickle'))
embeddings_dfs
for loc in tqdm(embeddings_dfs):
# read dataframe
indv_df = pd.read_pickle(loc).sort_values(by=["key", "start_time"])
indv = indv_df.indv.unique()[0]
# Get seqs
hand_seqs = [
list(indv_df[indv_df.syllables_sequence_id == seqid]["labels_num"].values)
for seqid in indv_df.syllables_sequence_id.unique()
]
results_df_FOMM = pd.DataFrame(
[FOMM(hand_seqs, prop_test=0)],
columns=[
"pom_model",
"seqs_train",
"seqs_test",
"pom_log_probability",
"n_params",
"n_data",
"aic",
"bic",
],
)
results_df_FOMM["indv"] = indv
save_loc = DATA_DIR / "HMM_fits" / "FOMM" / (indv + ".pickle")
ensure_dir(save_loc)
results_df_FOMM.to_pickle(save_loc)
### HDBSCAN as latent
# HDBSCAN seqs
for hdbscan_labels in ["hdbscan_labels_num", "hdbscan_labels-0.1_num", "hdbscan_labels-0.25_num"]:
hdbscan_latent_seqs = [
list(
indv_df[indv_df.syllables_sequence_id == seqid][hdbscan_labels].values
)
for seqid in indv_df.syllables_sequence_id.unique()
]
# make latent df
results_df_umap_hidden = pd.DataFrame(
[fit_fixed_latent(hand_seqs, hdbscan_latent_seqs, verbose=False)],
columns=[
"pom_model",
"log_prob",
"sum_log_prob",
"n_components",
"num_params",
"n_data",
"aic",
"bic",
],
)
results_df_umap_hidden["indv"] = indv
save_loc = DATA_DIR / "HMM_fits" / hdbscan_labels / "HDBSCAN" / (indv + ".pickle")
ensure_dir(save_loc)
results_df_umap_hidden.to_pickle(save_loc)
### second order model
seqs_second_order_states = [
list(
indv_df[indv_df.syllables_sequence_id == seqid][
"seqs_second_order_states"
].values
)
for seqid in indv_df.syllables_sequence_id.unique()
]
results_df_second_order_hidden = pd.DataFrame(
[fit_fixed_latent(hand_seqs, seqs_second_order_states, verbose=False)],
columns=[
"pom_model",
"log_prob",
"sum_log_prob",
"n_components",
"num_params",
"n_data",
"aic",
"bic",
],
)
results_df_second_order_hidden["indv"] = indv
save_loc = DATA_DIR / "SOMM" / (indv + ".pickle")
ensure_dir(save_loc)
results_df_second_order_hidden.to_pickle(save_loc)
print(
"---{}---\nAIC: \n\tSOMM: {}\n\tFOMM: {} \n\tHDBSCAN: {} \nLL: \n\tSOMM: {}\n\tFOMM: {} \n\tHDBSCAN: {}".format(
indv,
round(results_df_second_order_hidden.aic.values[0]),
round(results_df_FOMM.aic.values[0]),
round(results_df_umap_hidden.aic.values[0]),
round(results_df_second_order_hidden.sum_log_prob.values[0]),
round(results_df_FOMM.pom_log_probability.values[0]),
round(results_df_umap_hidden.sum_log_prob.values[0]),
)
) # arguments ordered so the values line up with the SOMM / FOMM / HDBSCAN labels
DATA_DIR | _____no_output_____ | MIT | notebooks/0.31-compare-sequence-models-bf/0.2-bf-FOMM-SOMM-HDBSCAN-latent-models.ipynb | xingjeffrey/avgn_paper |
Non-Parametric Tests Part IUp until now, you've been using standard hypothesis tests on means of normal distributions to design and analyze experiments. However, it's possible that you will encounter scenarios where you can't rely on only standard tests. This might be due to uncertainty about the true variability of a metric's distribution, a lack of data to assume normality, or wanting to do inference on a statistic that lacks a standard test. It's useful to know about some **non-parametric tests** not just as a workaround for cases like this, but also as a second check on your experimental results. The main benefit of a non-parametric test is that they don't rely on many assumptions of the underlying population, and so can be used in a wider range of circumstances compared to standard tests. In this notebook, you'll cover two non-parametric approaches that use resampling of the data to make inferences about distributions and differences. | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
% matplotlib inline | _____no_output_____ | MIT | original_notebooks/L2_Non-Parametric_Tests_Part_1_Solution.ipynb | epasseto/ThirdProjectStudies |
Bootstrapping

Bootstrapping is used to estimate sampling distributions by using the actually collected data to generate new samples that could have been hypothetically collected. In a standard bootstrap, a bootstrapped sample means drawing points from the original data _with replacement_ until we get as many points as there were in the original data. Essentially, we're treating the original data as the population: without making assumptions about the original population distribution, using the original data as a model of the population is the best that we can do.

Taking a lot of bootstrapped samples allows us to estimate the sampling distribution for various statistics on our original data. For example, let's say that we wanted to create a 95% confidence interval for the 90th percentile from a dataset of 5000 data points. (Perhaps we're looking at website load times and want to reduce the worst cases.) Bootstrapping makes this easy to estimate. First of all, we take a bootstrap sample (i.e. draw 5000 points with replacement from the original data) and record the 90th percentile and repeat this a large number of times, let's say 100 000. From this bunch of bootstrapped 90th percentile estimates, we form our confidence interval by finding the values that capture the central 95% of the estimates (cutting off 2.5% on each tail).

Implement this operation in the cells below, using the following steps:
- Initialize some useful variables by storing the number of data points in `n_points` and setting up an empty list for the bootstrapped quantile values in `sample_qs`.
- Create a loop for each trial where:
  - First generate a bootstrap sample by sampling from our data with replacement. ([`random.choice`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html) will be useful here.)
  - Then, compute the `q`th quantile of the sample and add it to the `sample_qs` list. If you're using numpy v1.15 or later, you can use the [`quantile`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.quantile.html) function to get the quantile directly with `q`; on v1.14 or earlier, you'll need to put `q` in terms of a percentile and use [`percentile`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.percentile.html) instead.
- After gathering the bootstrapped quantiles, find the limits that capture the central `c` proportion of quantiles to form the estimated confidence interval. | def quantile_ci(data, q, c = .95, n_trials = 1000):
"""
Compute a confidence interval for a quantile of a dataset using a bootstrap
method.
Input parameters:
data: data in form of 1-D array-like (e.g. numpy array or Pandas series)
q: quantile to be estimated, must be between 0 and 1
c: confidence interval width
n_trials: number of bootstrap samples to perform
Output value:
ci: Tuple indicating lower and upper bounds of bootstrapped
confidence interval
"""
# initialize storage of bootstrapped sample quantiles
n_points = data.shape[0]
sample_qs = []
# For each trial...
for _ in range(n_trials):
# draw a random sample from the data with replacement...
sample = np.random.choice(data, n_points, replace = True)
# compute the desired quantile...
sample_q = np.percentile(sample, 100 * q)
# and add the value to the list of sampled quantiles
sample_qs.append(sample_q)
# Compute the confidence interval bounds
lower_limit = np.percentile(sample_qs, (1 - c)/2 * 100)
upper_limit = np.percentile(sample_qs, (1 + c)/2 * 100)
return (lower_limit, upper_limit)
data = pd.read_csv('../data/bootstrapping_data.csv')
data.head(10)
# data visualization
plt.hist(data['time'], bins = np.arange(0, data['time'].max()+400, 400));
lims = quantile_ci(data['time'], 0.9)
print(lims) | _____no_output_____ | MIT | original_notebooks/L2_Non-Parametric_Tests_Part_1_Solution.ipynb | epasseto/ThirdProjectStudies |
Bootstrapping Notes

Confidence intervals coming from the bootstrap procedure will be optimistic compared to the true state of the world. This is because there will be things that we don't know about the real world that we can't account for, due to not having a parametric model of the world's state. Consider the extreme case of trying to understand the distribution of the maximum value: our confidence interval would never be able to include any value greater than the largest observed value and it makes no sense to have any lower bound below the maximum observation. Intuitively, however, there's a pretty clear possibility for there to be unobserved values that are larger than the one we've observed, especially for skewed data like shown in the example.

This doesn't override the bootstrap method's advantages, however. The bootstrap procedure is fairly simple and straightforward. Since you don't make assumptions about the distribution of data, it can be applicable for any case you encounter. The results should also be fairly comparable to standard tests. But it does take computational effort, and its output does depend on the data put in. For reference, for the 95% CI on the 90th percentile example explored above, the inferred interval would only capture about 83% of 90th percentiles from the original generating distribution. But a more intricate procedure using a binomial assumption to index on the observed data only does about one percentage point better (84%). And both of these depend on the specific data generated: a different set of 5000 points will produce different intervals, with different accuracies.

Binomial solution for percentile CIs reference: [1](https://www-users.york.ac.uk/~mb55/intro/cicent.htm), [2](https://stats.stackexchange.com/questions/99829/how-to-obtain-a-confidence-interval-for-a-percentile)

Permutation Tests

The permutation test is a resampling-type test used to compare the values on an outcome variable between two or more groups. In the case of the permutation test, resampling is done on the group labels. The idea here is that, under the null hypothesis, the outcome distribution should be the same for all groups, whether control or experimental. Thus, we can emulate the null by taking all of the data values as a single large group. Applying labels randomly to the data points (while maintaining the original group membership ratios) gives us one simulated outcome from the null.

The rest follows similar to the sampling approach to a standard hypothesis test, except that we haven't specified a reference distribution to sample from – we're sampling directly from the data we've collected. After applying the labels randomly to all the data and recording the outcome statistic many times, we compare our actual, observed statistic against the simulated statistics. A p-value is obtained by seeing how many simulated statistic values are as or more extreme than the one actually observed, and a conclusion is then drawn.

Try implementing a permutation test in the cells below to test if the 90th percentile of times is statistically significantly smaller for the experimental group, as compared to the control group:
- Initialize an empty list to store the difference in sample quantiles as `sample_diffs`.
- Create a loop for each trial where:
  - First generate a permutation sample by randomly shuffling the data point labels. ([`random.permutation`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.permutation.html) will be useful here.)
  - Then, compute the `q`th quantile of the data points that have been assigned to each group based on the permuted labels. Append the difference in quantiles to the `sample_diffs` list.
- After gathering the quantile differences for permuted samples, compute the observed difference for the actual data. Then, compute a p-value from the number of permuted sample differences that are less than or greater than the observed difference, depending on the desired alternative hypothesis. | def quantile_permtest(x, y, q, alternative = 'less', n_trials = 10_000):
"""
    Run a permutation test for the difference in a given quantile of the
    output variable between two labeled groups.
    Input parameters:
        x: 1-D array-like of data for the dependent / output feature
        y: 1-D array-like of group labels for each data point, coded as 0s and 1s
q: quantile to be estimated, must be between 0 and 1
alternative: type of test to perform, {'less', 'greater'}
n_trials: number of permutation trials to perform
Output value:
p: estimated p-value of test
"""
    # initialize storage of permuted sample quantile differences
sample_diffs = []
# For each trial...
for _ in range(n_trials):
# randomly permute the grouping labels
labels = np.random.permutation(y)
# compute the difference in quantiles
cond_q = np.percentile(x[labels == 0], 100 * q)
exp_q = np.percentile(x[labels == 1], 100 * q)
# and add the value to the list of sampled differences
sample_diffs.append(exp_q - cond_q)
# compute observed statistic
cond_q = np.percentile(x[y == 0], 100 * q)
exp_q = np.percentile(x[y == 1], 100 * q)
obs_diff = exp_q - cond_q
# compute a p-value
if alternative == 'less':
hits = (sample_diffs <= obs_diff).sum()
elif alternative == 'greater':
hits = (sample_diffs >= obs_diff).sum()
return (hits / n_trials)
data = pd.read_csv('../data/permutation_data.csv')
data.head(10)
# data visualization
bin_borders = np.arange(0, data['time'].max()+400, 400)
plt.hist(data[data['condition'] == 0]['time'], alpha = 0.5, bins = bin_borders)
plt.hist(data[data['condition'] == 1]['time'], alpha = 0.5, bins = bin_borders)
plt.legend(labels = ['control', 'experiment']);
# Just how different are the two distributions' 90th percentiles?
print(np.percentile(data[data['condition'] == 0]['time'], 90),
np.percentile(data[data['condition'] == 1]['time'], 90))
quantile_permtest(data['time'], data['condition'], 0.9,
alternative = 'less') | _____no_output_____ | MIT | original_notebooks/L2_Non-Parametric_Tests_Part_1_Solution.ipynb | epasseto/ThirdProjectStudies |
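To see what the permutation procedure is actually doing, it can help to plot the null distribution of permuted 90th-percentile differences next to the observed difference; the p-value is simply the fraction of that histogram at or below the marked line. Below is a minimal sketch of such a visualization. It reuses the `data` frame loaded above, while the trial count and the names `n_viz_trials` and `viz_diffs` are illustrative choices, not part of the original exercise.

```python
# Rebuild a small null distribution of permuted 90th-percentile differences
# and mark the observed difference on top of it.
n_viz_trials = 1000
viz_diffs = []
for _ in range(n_viz_trials):
    labels = np.random.permutation(data['condition'])
    viz_diffs.append(np.percentile(data['time'][labels == 1], 90) -
                     np.percentile(data['time'][labels == 0], 90))

obs_diff = (np.percentile(data[data['condition'] == 1]['time'], 90) -
            np.percentile(data[data['condition'] == 0]['time'], 90))

plt.hist(viz_diffs, bins = 30, alpha = 0.5)
plt.axvline(x = obs_diff, color = 'red')
plt.xlabel('permuted difference in 90th percentiles');
```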
The predictions are still quite far from the actual values; let's try the model with fewer variables. |
sns.heatmap(data_tratada.corr(),annot=True)
X = data_tratada[['CRIM', 'NOX', 'RM','LSTAT']]
y = data_tratada[['Price']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=66)
lm = LinearRegression()
lm.fit(X_train,y_train)
predictions = lm.predict(X_test)
plot.scatter(y_test,predictions)
sns.distplot((y_test-predictions),bins=50);
    # Residuals: y_test - predictions.
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions))) | MAE: 3.25779091361
MSE: 19.0984201753
RMSE: 4.37017392964
| MIT | Machine Learning/Linear-Regression/Boston DataFrame2.ipynb | wagneralbjr/Python_data_science-bootcamp_udemy |
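The error metrics above are hard to judge in isolation; a quick way to gauge whether the reduced feature set still explains most of the price variance is the R² score on the same test split. A minimal sketch, assuming `y_test` and `predictions` from the cells above are still in scope:

```python
from sklearn.metrics import r2_score

# R^2 on the held-out set: values near 1 mean the four selected features
# still capture most of the variance in the house prices.
print('R2:', r2_score(y_test, predictions))
```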
Tutorial-IllinoisGRMHD: harm_utoprim_2d.c Authors: Leo Werneck & Zach Etienne **This module is currently under development** In this tutorial module we explain the conservative-to-primitive algorithm used by `HARM`. This module will likely be absorbed by another one once we finish documenting the code. Required and recommended citations:* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., Mösta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)). Table of Contents$$\label{toc}$$This module is organized as follows0. [Step 0](src_dir): **Source directory creation**1. [Step 1](introduction): **Introduction**1. [Step 2](harm_utoprim_2d__c__eos_indep): **EOS independent routines** 1. [Step 2.a](utoprim_2d): *The `Utoprim_2d()` function* 1. [Step 2.a.i](utoprim_2d__bi_and_alpha): Setting $B^{i}_{\rm HARM}$ and $\alpha$ 1. [Step 2.a.ii](utoprim_2d__converting): Preparing the variables to be used by the `Utoprim_new_body()` function 1. [Step 2.b](utoprim_new_body): *The `Utoprim_new_body()` function* 1. [Step 2.b.i](utoprim_new_body__basic_quantities): Computing basic quantities 1. [Step 2.b.ii](utoprim_new_body__wlast): Determining $W$ from the previous iteration, $W_{\rm last}$ 1. [Step 2.b.iii](utoprim_new_body__vsqlast_and_recompute_w_and_vsq): Compute $v^{2}_{\rm last}$, then update $v^{2}$ and $W$ 1. [Step 2.b.iv](utoprim_new_body__compute_prims): Computing the primitive variables 1. [Step 2.c](vsq_calc): *The `vsq_calc()` function* 1. [Step 2.d](x1_of_x0): *The `x1_of_x0()` function* 1. [Step 2.e](validate_x): *The `validate_x()` function* 1. [Step 2.f](general_newton_raphson): *The `general_newton_raphson()` function* 1. [Step 2.g](func_vsq): *The `func_vsq()` function*1. [Step 3](harm_utoprim_2d__c__eos_dep): **EOS dependent routines** 1. [Step 3.a](pressure_w_vsq): *The `pressure_W_vsq()` function* 1. [Step 3.b](dpdw_calc_vsq): *The `dpdW_calc_vsq()` function* 1. [Step 3.c](dpdvsq_calc): *The `dpdvsq_calc()` function* 1. [Step 3.c.i](dpdvsq_calc__basic_quantities): Setting basic quantities and computing $P_{\rm cold}$ and $\epsilon_{\rm cold}$ 1. [Step 3.c.ii](dpdvsq_calc__dpcolddvsq): Computing $\frac{\partial P_{\rm cold}}{\partial\left(v^{2}\right)}$ 1. [Step 3.c.iii](dpdvsq_calc__depscolddvsq): Computing $\frac{\partial \epsilon_{\rm cold}}{\partial\left(v^{2}\right)}$ 1. [Step 3.c.iv](dpdvsq_calc__dpdvsq): Computing $\frac{\partial p_{\rm hybrid}}{\partial\left(v^{2}\right)}$1. [Step 4](code_validation): **Code validation**1. 
[Step 5](latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file** Step 0: Source directory creation \[Back to [top](toc)\]$$\label{src_dir}$$We will now use the [cmdline_helper.py NRPy+ module](Tutorial-Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet. | # Step 0: Creation of the IllinoisGRMHD source directory
# Step 0a: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0b: Load up cmdline_helper and create the directory
import cmdline_helper as cmd
IGM_src_dir_path = os.path.join("..","src")
cmd.mkdir(IGM_src_dir_path)
# Step 0c: Create the output file path
outfile_path__harm_utoprim_2d__c = os.path.join(IGM_src_dir_path,"harm_utoprim_2d.c") | _____no_output_____ | BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 1: Introduction \[Back to [top](toc)\]$$\label{introduction}$$Comment on license: `HARM` uses GPL, while `IllinoisGRMHD` uses BSD. Step 2: EOS independent routines \[Back to [top](toc)\]$$\label{harm_utoprim_2d__c__eos_indep}$$Let us now start documenting `harm_utoprim_2d.c`, which is part of the `HARM` code. Our main reference throughout this discussion will be the required citation [Noble *et al.* (2006)](https://arxiv.org/abs/astro-ph/0512420). We will start with the code's required preamble. | %%writefile $outfile_path__harm_utoprim_2d__c
#ifndef __HARM_UTOPRIM_2D__C__
#define __HARM_UTOPRIM_2D__C__
/***********************************************************************************
Copyright 2006 Charles F. Gammie, Jonathan C. McKinney, Scott C. Noble,
Gabor Toth, and Luca Del Zanna
HARM version 1.0 (released May 1, 2006)
This file is part of HARM. HARM is a program that solves hyperbolic
partial differential equations in conservative form using high-resolution
shock-capturing techniques. This version of HARM has been configured to
solve the relativistic magnetohydrodynamic equations of motion on a
stationary black hole spacetime in Kerr-Schild coordinates to evolve
an accretion disk model.
You are morally obligated to cite the following two papers in his/her
scientific literature that results from use of any part of HARM:
[1] Gammie, C. F., McKinney, J. C., \& Toth, G.\ 2003,
Astrophysical Journal, 589, 444.
[2] Noble, S. C., Gammie, C. F., McKinney, J. C., \& Del Zanna, L. \ 2006,
Astrophysical Journal, 641, 626.
Further, we strongly encourage you to obtain the latest version of
HARM directly from our distribution website:
http://rainman.astro.uiuc.edu/codelib/
HARM is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
HARM is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with HARM; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************************/
/*************************************************************************************/
/*************************************************************************************/
/*************************************************************************************
utoprim_2d.c:
---------------
Uses the 2D method:
-- solves for two independent variables (W,v^2) via a 2D
Newton-Raphson method
-- can be used (in principle) with a general equation of state.
-- Currently returns with an error state (>0) if a negative rest-mass
density or internal energy density is calculated. You may want
to change this aspect of the code so that it still calculates the
velocity and so that you can floor the densities. If you want to
change this aspect of the code please comment out the "return(retval)"
statement after "retval = 5;" statement in Utoprim_new_body();
******************************************************************************/
static const int NEWT_DIM=2;
// Declarations:
static CCTK_REAL vsq_calc(CCTK_REAL W,CCTK_REAL &Bsq,CCTK_REAL &QdotBsq,CCTK_REAL &Qtsq,CCTK_REAL &Qdotn,CCTK_REAL &D);
static int Utoprim_new_body(eos_struct eos, CCTK_REAL U[], CCTK_REAL gcov[NDIM][NDIM], CCTK_REAL gcon[NDIM][NDIM], CCTK_REAL gdet, CCTK_REAL prim[],long &n_iter);
static int general_newton_raphson( eos_struct eos, CCTK_REAL x[], int n, long &n_iter, void (*funcd) (eos_struct, CCTK_REAL [], CCTK_REAL [], CCTK_REAL [], CCTK_REAL [][NEWT_DIM], CCTK_REAL *, CCTK_REAL *, int,CCTK_REAL &,CCTK_REAL &,CCTK_REAL &,CCTK_REAL &,CCTK_REAL &),CCTK_REAL &Bsq,CCTK_REAL &QdotBsq,CCTK_REAL &Qtsq,CCTK_REAL &Qdotn,CCTK_REAL &D);
static void func_vsq( eos_struct eos, CCTK_REAL [], CCTK_REAL [], CCTK_REAL [], CCTK_REAL [][NEWT_DIM], CCTK_REAL *f, CCTK_REAL *df, int n,CCTK_REAL &Bsq,CCTK_REAL &QdotBsq,CCTK_REAL &Qtsq,CCTK_REAL &Qdotn,CCTK_REAL &D);
static CCTK_REAL x1_of_x0(CCTK_REAL x0, CCTK_REAL &Bsq, CCTK_REAL &QdotBsq, CCTK_REAL &Qtsq, CCTK_REAL &Qdotn, CCTK_REAL &D ) ;
static CCTK_REAL pressure_W_vsq(eos_struct eos, CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D) ;
static CCTK_REAL dpdW_calc_vsq(CCTK_REAL W, CCTK_REAL vsq);
static CCTK_REAL dpdvsq_calc(eos_struct eos, CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D);
/**********************************************************************/
/******************************************************************
Utoprim_2d():
-- Driver for new prim. var. solver. The driver just translates
between the two sets of definitions for U and P. The user may
wish to alter the translation as they see fit. Note that Greek
indices run 0,1,2,3 and Latin indices run 1,2,3 (spatial only).
/ rho u^t \
U = | T^t_t + rho u^t | sqrt(-det(g_{\mu\nu}))
| T^t_i |
\ B^i /
/ rho \
P = | uu |
| \tilde{u}^i |
\ B^i /
Arguments:
U[NPR] = conserved variables (current values on input/output);
gcov[NDIM][NDIM] = covariant form of the metric ;
gcon[NDIM][NDIM] = contravariant form of the metric ;
gdet = sqrt( - determinant of the metric) ;
prim[NPR] = primitive variables (guess on input, calculated values on
output if there are no problems);
-- NOTE: for those using this routine for special relativistic MHD and are
unfamiliar with metrics, merely set
gcov = gcon = diag(-1,1,1,1) and gdet = 1. ;
******************************************************************/ | Writing ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 2.a: The `Utoprim_2d()` function \[Back to [top](toc)\]$$\label{utoprim_2d}$$The `Utoprim_2d()` function is the driver function of the `HARM` conservative-to-primitive algorithm. We remind you from the definitions of primitive and conservative variables used in the code:$$\begin{align}\boldsymbol{P}_{\rm HARM} &= \left\{\rho_{b},u,\tilde{u}^{i},B^{i}_{\rm HARM}\right\}\ ,\\\boldsymbol{C}_{\rm HARM} &= \left\{\sqrt{-g}\rho_{b}u^{0},\sqrt{-g}\left(T^{0}_{\ 0}+\rho_{b}u^{0}\right),\sqrt{-g}T^{0}_{\ i},\sqrt{-g}B^{i}_{\rm HARM}\right\}\ .\end{align}$$ Step 2.a.i: Setting $B^{i}_{\rm HARM}$ and $\alpha$ \[Back to [top](toc)\]$$\label{utoprim_2d__bi_and_alpha}$$Let$$\tilde{B}^{i}_{\rm HARM} \equiv \sqrt{-g}B^{i}_{\rm HARM}\ .$$The code starts by relating$$\boxed{B^{i}_{\rm HARM} = \frac{\tilde{B}^{i}_{\rm HARM}}{\sqrt{-g}}}\ ,$$and setting$$\boxed{\alpha = \frac{1}{\sqrt{-g^{00}}}} \ .$$ | %%writefile -a $outfile_path__harm_utoprim_2d__c
int Utoprim_2d(eos_struct eos, CCTK_REAL U[NPR], CCTK_REAL gcov[NDIM][NDIM], CCTK_REAL gcon[NDIM][NDIM],
CCTK_REAL gdet, CCTK_REAL prim[NPR], long &n_iter)
{
CCTK_REAL U_tmp[NPR], prim_tmp[NPR];
int i, ret;
CCTK_REAL alpha;
if( U[0] <= 0. ) {
return(-100);
}
/* First update the primitive B-fields */
for(i = BCON1; i <= BCON3; i++) prim[i] = U[i] / gdet ;
/* Set the geometry variables: */
alpha = 1.0/sqrt(-gcon[0][0]); | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 2.a.ii: Preparing the variables to be used by the `Utoprim_new_body()` function \[Back to [top](toc)\]$$\label{utoprim_2d__converting}$$The conservative-to-primitive algorithm uses the `Utoprim_new_body()` function. However, this function assumes a *different* set of primitive/conservative variables. Thus, we must perform the proper conversion. First, let us ease on the notation a bit by defining:$$\boldsymbol{C} \equiv \left\{\rho_{\star},u_{\star},\tilde{S}_{i},\tilde{B}^{i}_{\rm HARM}\right\} \equiv \left\{\sqrt{-g}\rho_{b}u^{0},\sqrt{-g}\left(T^{0}_{\ 0}+\rho_{b}u^{0}\right),\sqrt{-g}T^{0}_{\ i},\sqrt{-g}B^{i}_{\rm HARM}\right\}\ .$$Below we list the main differences in the conservative variables:| `Utoprim_2d()` | `Utoprim_new_body()` ||------------------------------------------|---------------------------------------------------------------------------|| $\color{blue}{\textbf{Conservatives}}$ | $\color{red}{\textbf{Conservatives}}$ || $\color{blue}{\rho_{\star}}$ | $\color{red}{\frac{\alpha}{\sqrt{-g}}\rho_{\star}}$ || $\color{blue}{u_{\star}}$ | $\color{red}{\frac{\alpha}{\sqrt{-g}}\left(u_{\star}-\rho_{\star}\right)}$|| $\color{blue}{\tilde{S}_{i}}$ | $\color{red}{\frac{\alpha}{\sqrt{-g}}\tilde{S}_{i}}$ || $\color{blue}{\tilde{B}^{i}_{\rm HARM}}$ | $\color{red}{\frac{\alpha}{\sqrt{-g}}\tilde{B}^{i}_{\rm HARM}}$ |These are necessary conversions because while `Utoprim_2d()` assumes the set of conservatives above, `Utoprim_new_body()` assumes$$\left\{\gamma\rho_{b},\alpha T^{0}_{\ \ 0}, \alpha T^{0}_{\ \ i}, \alpha B^{i}_{\rm HARM}\right\}\ .$$Let us first pause to understand the table above. From definition (15) in [Noble *et al.* (2006)](https://arxiv.org/abs/astro-ph/0512420) and the discussion just below it, we know that $\gamma = \alpha u^{0}$. Thus$$\rho_{\star} = \sqrt{-g}\rho_{b}u^{0} = \sqrt{-g}\left(\frac{\gamma}{\alpha}\rho_{b}\right)\implies\boxed{\gamma \rho_{b} = \frac{\alpha}{\sqrt{-g}}\rho_{\star}}\ .$$Then we have$$u_{\star} = \sqrt{-g}\left(T^{0}_{\ \ 0} + \rho_{b}u^{0}\right)= \sqrt{-g}\left(T^{0}_{\ \ 0} + \frac{\rho_{\star}}{\sqrt{-g}}\right) = \sqrt{-g}T^{0}_{\ \ 0} + \rho_{\star} \implies \boxed{\alpha T^{0}_{\ \ 0} = \frac{\alpha}{\sqrt{-g}}\left(u_{\star}-\rho_{\star}\right)}\ .$$The other two relations are more straightforward. We have$$\tilde{S}_{i} = \sqrt{-g}T^{0}_{\ \ i} \implies \boxed{\alpha T^{0}_{\ \ i} = \frac{\alpha}{\sqrt{-g}}\tilde{S}_{i}}\ ,$$and$$\tilde{B}^{i}_{\rm HARM} = \sqrt{-g}B^{i}_{\rm HARM}\implies \boxed{\alpha B^{i}_{\rm HARM} = \frac{\alpha}{\sqrt{-g}}\tilde{B}^{i}_{\rm HARM}}\ .$$ | %%writefile -a $outfile_path__harm_utoprim_2d__c
/* Transform the CONSERVED variables into the new system */
U_tmp[RHO] = alpha * U[RHO] / gdet;
U_tmp[UU] = alpha * (U[UU] - U[RHO]) / gdet ;
for( i = UTCON1; i <= UTCON3; i++ ) {
U_tmp[i] = alpha * U[i] / gdet ;
}
for( i = BCON1; i <= BCON3; i++ ) {
U_tmp[i] = alpha * U[i] / gdet ;
} | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Below we list the necessary transformations on the primitive variables:| `Utoprim_2d()` | `Utoprim_new_body()` ||-------------------------------------|----------------------------------------|| $\color{blue}{\textbf{Primitives}}$ | $\color{red}{\textbf{Primitives}}$ || $\color{blue}{\rho_{b}}$ | $\color{red}{\rho_{b}}$ || $\color{blue}{u}$ | $\color{red}{u}$ || $\color{blue}{\tilde{u}^{i}}$ | $\color{red}{\tilde{u}^{i}}$ || $\color{blue}{B^{i}_{\rm HARM}}$ | $\color{red}{\alpha B^{i}_{\rm HARM}}$ |After this slight modification we call the `Utoprim_new_body()` function. If it returns without errors, then the variables ${\rm prim\_tmp}$ will now contain the values of the primitives. We then update the ${\rm prim}$ variables with these newly computed values. | %%writefile -a $outfile_path__harm_utoprim_2d__c
/* Transform the PRIMITIVE variables into the new system */
for( i = 0; i < BCON1; i++ ) {
prim_tmp[i] = prim[i];
}
for( i = BCON1; i <= BCON3; i++ ) {
prim_tmp[i] = alpha*prim[i];
}
ret = Utoprim_new_body(eos, U_tmp, gcov, gcon, gdet, prim_tmp,n_iter);
/* Transform new primitive variables back if there was no problem : */
if( ret == 0 || ret == 5 || ret==101 ) {
for( i = 0; i < BCON1; i++ ) {
prim[i] = prim_tmp[i];
}
}
return( ret ) ;
} | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 2.b: The `Utoprim_new_body()` function \[Back to [top](toc)\]$$\label{utoprim_new_body}$$ | %%writefile -a $outfile_path__harm_utoprim_2d__c
/**********************************************************************/
/**********************************************************************************
Utoprim_new_body():
-- Attempt an inversion from U to prim using the initial guess prim.
-- This is the main routine that calculates auxiliary quantities for the
Newton-Raphson routine.
-- assumes that
/ rho gamma \
U = | alpha T^t_\mu |
\ alpha B^i /
/ rho \
prim = | uu |
| \tilde{u}^i |
\ alpha B^i /
return: (i*100 + j) where
i = 0 -> Newton-Raphson solver either was not called (yet or not used)
or returned successfully;
1 -> Newton-Raphson solver did not converge to a solution with the
given tolerances;
2 -> Newton-Raphson procedure encountered a numerical divergence
(occurrence of "nan" or "+/-inf" ;
j = 0 -> success
1 -> failure: some sort of failure in Newton-Raphson;
2 -> failure: utsq<0 w/ initial p[] guess;
3 -> failure: W<0 or W>W_TOO_BIG
4 -> failure: v^2 > 1
5 -> failure: rho,uu <= 0 ;
**********************************************************************************/
static int Utoprim_new_body(eos_struct eos, CCTK_REAL U[NPR], CCTK_REAL gcov[NDIM][NDIM],
CCTK_REAL gcon[NDIM][NDIM], CCTK_REAL gdet, CCTK_REAL prim[NPR], long &n_iter)
{
CCTK_REAL x_2d[NEWT_DIM];
CCTK_REAL QdotB,Bcon[NDIM],Bcov[NDIM],Qcov[NDIM],Qcon[NDIM],ncov[NDIM],ncon[NDIM],Qsq,Qtcon[NDIM];
CCTK_REAL rho0,u,p,w,gammasq,gamma,gtmp,W_last,W,utsq,vsq;
int i,j, n, retval, i_increase;
n = NEWT_DIM ;
// Assume ok initially:
retval = 0; | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 2.b.i: Computing basic quantities \[Back to [top](toc)\]$$\label{utoprim_new_body__basic_quantities}$$We start by computing basic quantities from the input variables. Notice that this conservative-to-primitive algorithm does not need to update the magnetic field, thus$$\boxed{B_{\rm prim}^{i} = B_{\rm conserv}^{i}}\ .$$Since they are both equal, we will not distinguish between prim and conserv in what follows. We also set $B^{0} = 0$. Then we define$$\boxed{Q_{\mu} \equiv \alpha T^{0}_{\ \ \mu}}\ .$$From these, the following quantities are then computed:$$\boxed{\begin{align}B_{i} &= g_{i\mu}B^{\mu}\\Q^{\mu} &= g^{\mu\nu}Q_{\nu}\\B^{2} &= B_{i}B^{i}\\Q\cdot B &= Q_{\mu}B^{\mu}\\\left(Q\cdot B\right)^{2} &= \left(Q\cdot B\right)\left(Q\cdot B\right)\\n_{\mu} &= \left(-\alpha,0,0,0\right)\\n^{\mu} &= g^{\mu\nu}n_{\nu}\\\left(Q\cdot n\right) &= Q^{\mu}n_{\mu}\\Q^{2} &= Q_{\mu}Q^{\mu}\\\tilde{Q}^{2} &= Q^{2} + \left(Q\cdot n\right)\left(Q\cdot n\right)\\D &\equiv \gamma \rho_{b}\end{align}}\ .$$ | %%writefile -a $outfile_path__harm_utoprim_2d__c
for(i = BCON1; i <= BCON3; i++) prim[i] = U[i] ;
// Calculate various scalars (Q.B, Q^2, etc) from the conserved variables:
Bcon[0] = 0. ;
for(i=1;i<4;i++) Bcon[i] = U[BCON1+i-1] ;
lower_g(Bcon,gcov,Bcov) ;
for(i=0;i<4;i++) Qcov[i] = U[QCOV0+i] ;
raise_g(Qcov,gcon,Qcon) ;
CCTK_REAL Bsq = 0. ;
for(i=1;i<4;i++) Bsq += Bcon[i]*Bcov[i] ;
QdotB = 0. ;
for(i=0;i<4;i++) QdotB += Qcov[i]*Bcon[i] ;
CCTK_REAL QdotBsq = QdotB*QdotB ;
ncov_calc(gcon,ncov) ;
// FIXME: The exact form of n^{\mu} can be found
// in eq. (2.116) and implementing it
// directly is a lot more efficient than
// performing n^{\mu} = g^{\mu\nu}n_{nu}
raise_g(ncov,gcon,ncon);
CCTK_REAL Qdotn = Qcon[0]*ncov[0] ;
Qsq = 0. ;
for(i=0;i<4;i++) Qsq += Qcov[i]*Qcon[i] ;
CCTK_REAL Qtsq = Qsq + Qdotn*Qdotn ;
CCTK_REAL D = U[RHO] ; | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
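A side note on the `FIXME` above: because $n_{\mu} = \left(-\alpha,0,0,0\right)$ has a single nonzero component, the generic `raise_g()` contraction is overkill here; raising the index only touches the $\mu$-$0$ column of the inverse metric, $n^{\mu} = -\alpha g^{\mu 0}$, which in 3+1 language is $n^{\mu} = \left(1/\alpha,-\beta^{i}/\alpha\right)$. The snippet below is a minimal Python sketch of that shortcut (the variable names are illustrative and not those used in the C code); it simply checks that the one-column evaluation agrees with the full contraction.

```python
import numpy as np

def ncon_direct(gcon, alpha):
    """Direct evaluation of n^mu = -alpha g^{mu 0}, valid since n_mu = (-alpha,0,0,0)."""
    return -alpha * gcon[:, 0]

# Quick check against the generic index raising n^mu = g^{mu nu} n_nu,
# here on a flat metric purely for illustration.
gcon  = np.diag([-1.0, 1.0, 1.0, 1.0])
alpha = 1.0/np.sqrt(-gcon[0, 0])
ncov  = np.array([-alpha, 0.0, 0.0, 0.0])
print(np.allclose(ncon_direct(gcon, alpha), gcon @ ncov))   # expect True
```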
Step 2.b.ii: Determining $W$ from the previous iteration, $W_{\rm last}$ \[Back to [top](toc)\]$$\label{utoprim_new_body__wlast}$$The quantity $W$ is defined as$$W \equiv w\gamma^{2}\ ,$$where$$\begin{align}w &= \rho_{b} + u + p\ ,\\\gamma^{2} &= 1 + g_{ij}\tilde{u}^{i}\tilde{u}^{j}\ .\end{align}$$The code thus first evaluates $g_{ij}\tilde{u}^{i}\tilde{u}^{j}$, and from it $\gamma^{2}$ and $\gamma$. Then, by computing $\rho_{b}$ and $p$ from the input variables, i.e. $D$, one can determine $w$ and hence the value of $W$ from the input (previous iteration) values, which we denote by $W_{\rm last}$. **Dependency note:** Note that this function depends on the `pressure_rho0_u()` function, which is *not* EOS independent. | %%writefile -a $outfile_path__harm_utoprim_2d__c
/* calculate W from last timestep and use for guess */
utsq = 0. ;
for(i=1;i<4;i++)
for(j=1;j<4;j++) utsq += gcov[i][j]*prim[UTCON1+i-1]*prim[UTCON1+j-1] ;
if( (utsq < 0.) && (fabs(utsq) < 1.0e-13) ) {
utsq = fabs(utsq);
}
if(utsq < 0. || utsq > UTSQ_TOO_BIG) {
retval = 2;
return(retval) ;
}
gammasq = 1. + utsq ;
gamma = sqrt(gammasq);
// Always calculate rho from D and gamma so that using D in EOS remains consistent
// i.e. you don't get positive values for dP/d(vsq) .
rho0 = D / gamma ;
u = prim[UU] ;
p = pressure_rho0_u(eos, rho0,u) ;
w = rho0 + u + p ;
W_last = w*gammasq ;
// Make sure that W is large enough so that v^2 < 1 :
i_increase = 0;
while( (( W_last*W_last*W_last * ( W_last + 2.*Bsq )
- QdotBsq*(2.*W_last + Bsq) ) <= W_last*W_last*(Qtsq-Bsq*Bsq))
&& (i_increase < 10) ) {
W_last *= 10.;
i_increase++;
} | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 2.b.iii: Compute $v^{2}_{\rm last}$, then update $v^{2}$ and $W$ \[Back to [top](toc)\]$$\label{utoprim_new_body__vsqlast_and_recompute_w_and_vsq}$$Then we use equation (28) in [Noble *et al.* (2006)](https://arxiv.org/abs/astro-ph/0512420) to determine $v^{2}$:$$\boxed{v^{2} = \frac{\tilde{Q}^{2}W^{2} + \left(Q\cdot B\right)^{2}\left(B^{2}+2W\right)}{\left(B^{2}+W\right)^{2}W^{2}}}\ .$$This is done by calling the `x1_of_x0()` function, where $x_{0} = W$ and $x_{1} = v^{2}$, which itself calls the `vsq_calc()` function which implements the boxed equation above.After we have $\left\{W_{\rm last},v^{2}_{\rm last}\right\}$ we use them as the initial guess for the `general_newton_raphson()`, which returns the updated values $\left\{W,v^{2}\right\}$.All functions mentioned above are documented in this tutorial notebook, so look at the [Table of Contents](toc) for more information. | %%writefile -a $outfile_path__harm_utoprim_2d__c
// Calculate W and vsq:
x_2d[0] = fabs( W_last );
x_2d[1] = x1_of_x0( W_last , Bsq,QdotBsq,Qtsq,Qdotn,D) ;
retval = general_newton_raphson( eos, x_2d, n, n_iter, func_vsq, Bsq,QdotBsq,Qtsq,Qdotn,D) ;
W = x_2d[0];
vsq = x_2d[1];
/* Problem with solver, so return denoting error before doing anything further */
if( (retval != 0) || (W == FAIL_VAL) ) {
retval = retval*100+1;
return(retval);
}
else{
if(W <= 0. || W > W_TOO_BIG) {
retval = 3;
return(retval) ;
}
}
// Calculate v^2:
if( vsq >= 1. ) {
vsq = 1.-2.e-16;
//retval = 4;
//return(retval) ;
} | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 2.b.iv: Computing the primitive variables \[Back to [top](toc)\]$$\label{utoprim_new_body__compute_prims}$$Now that we have $\left\{W,v^{2}\right\}$, we recompute the primitive variables. We start with$$\left\{\begin{align}\tilde{g} &\equiv \sqrt{1-v^{2}}\\\gamma &= \frac{1}{\tilde{g}}\end{align}\right.\implies\boxed{\rho_{b} = D\tilde{g}}\ .$$Then, we determine the pressure $p$ using the `pressure_rho0_w()` function and$$w = W\left(1-v^{2}\right)\implies\boxed{u = w - \left(\rho_{b} + p\right)}\ .$$**Dependency note:** Note that this function depends on the `pressure_rho0_w()` function, which is *not* EOS independent. Finally, we can obtain $\tilde{u}^{i}$ using eq. 31 in [Noble *et al.* (2006)](https://arxiv.org/abs/astro-ph/0512420)$$\boxed{\tilde{u}^{i} = \frac{\gamma}{\left(W+B^{2}\right)}\left[\tilde{Q}^{i} + \frac{\left(Q\cdot B\right)}{W}B^{i}\right]}\ ,$$where$$\tilde{Q}^{i} = Q^{i} + \left(Q\cdot n\right)n^{i}\ .$$ | %%writefile -a $outfile_path__harm_utoprim_2d__c
// Recover the primitive variables from the scalars and conserved variables:
gtmp = sqrt(1. - vsq);
gamma = 1./gtmp ;
rho0 = D * gtmp;
w = W * (1. - vsq) ;
p = pressure_rho0_w(eos, rho0,w) ;
u = w - (rho0 + p) ; // u = rho0 eps, w = rho0 h
if( (rho0 <= 0.) || (u <= 0.) ) {
// User may want to handle this case differently, e.g. do NOT return upon
// a negative rho/u, calculate v^i so that rho/u can be floored by other routine:
retval = 5;
//return(retval) ;
}
/*
if(retval==5 && fabs(u)<1e-16) {
u = fabs(u);
CCTK_VInfo(CCTK_THORNSTRING,"%e\t%e\t%e",1.0-w/(rho0 + p),rho0,p);
retval=0;
}
*/
prim[RHO] = rho0 ;
prim[UU] = u ;
for(i=1;i<4;i++) Qtcon[i] = Qcon[i] + ncon[i] * Qdotn;
for(i=1;i<4;i++) prim[UTCON1+i-1] = gamma/(W+Bsq) * ( Qtcon[i] + QdotB*Bcon[i]/W ) ;
/* set field components */
for(i = BCON1; i <= BCON3; i++) prim[i] = U[i] ;
/* done! */
return(retval) ;
} | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 2.c: The `vsq_calc()` function \[Back to [top](toc)\]$$\label{vsq_calc}$$This function implements eq. (28) in [Noble *et al.* (2006)](https://arxiv.org/abs/astro-ph/0512420) to determine $v^{2}$:$$\boxed{v^{2} = \frac{\tilde{Q}^{2}W^{2} + \left(Q\cdot B\right)^{2}\left(B^{2}+2W\right)}{\left(B^{2}+W\right)^{2}W^{2}}}\ .$$ | %%writefile -a $outfile_path__harm_utoprim_2d__c
/**********************************************************************/
/****************************************************************************
vsq_calc():
-- evaluate v^2 (spatial, normalized velocity) from
W = \gamma^2 w
****************************************************************************/
static CCTK_REAL vsq_calc(CCTK_REAL W,CCTK_REAL &Bsq,CCTK_REAL &QdotBsq,CCTK_REAL &Qtsq,CCTK_REAL &Qdotn,CCTK_REAL &D)
{
CCTK_REAL Wsq,Xsq;
Wsq = W*W ;
Xsq = (Bsq + W) * (Bsq + W);
return( ( Wsq * Qtsq + QdotBsq * (Bsq + 2.*W)) / (Wsq*Xsq) );
} | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 2.d: The `x1_of_x0()` function \[Back to [top](toc)\]$$\label{x1_of_x0}$$This function computes $v^{2}$, as described [above](vsq_calc), then performs physical checks on $v^{2}$ (i.e. whether or not it is superluminal). This function assumes $W$ is physical. | %%writefile -a $outfile_path__harm_utoprim_2d__c
/********************************************************************
x1_of_x0():
-- calculates v^2 from W with some physical bounds checking;
-- asumes x0 is already physical
-- makes v^2 physical if not;
*********************************************************************/
static CCTK_REAL x1_of_x0(CCTK_REAL x0, CCTK_REAL &Bsq, CCTK_REAL &QdotBsq, CCTK_REAL &Qtsq, CCTK_REAL &Qdotn, CCTK_REAL &D )
{
CCTK_REAL vsq;
CCTK_REAL dv = 1.e-15;
vsq = fabs(vsq_calc(x0,Bsq,QdotBsq,Qtsq,Qdotn,D)) ; // guaranteed to be positive
return( ( vsq > 1. ) ? (1.0 - dv) : vsq );
} | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 2.e: The `validate_x()` function \[Back to [top](toc)\]$$\label{validate_x}$$This function performs physical tests on $\left\{W,v^{2}\right\}$ based on their definitions. | %%writefile -a $outfile_path__harm_utoprim_2d__c
/********************************************************************
validate_x():
-- makes sure that x[0,1] have physical values, based upon
their definitions:
*********************************************************************/
static void validate_x(CCTK_REAL x[2], CCTK_REAL x0[2] )
{
CCTK_REAL dv = 1.e-15;
/* Always take the absolute value of x[0] and check to see if it's too big: */
x[0] = fabs(x[0]);
x[0] = (x[0] > W_TOO_BIG) ? x0[0] : x[0];
x[1] = (x[1] < 0.) ? 0. : x[1]; /* if it's too small */
x[1] = (x[1] > 1.) ? (1. - dv) : x[1]; /* if it's too big */
return;
} | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 2.f: The `general_newton_raphson()` function \[Back to [top](toc)\]$$\label{general_newton_raphson}$$This function implements a [multidimensional Newton-Raphson method](https://en.wikipedia.org/wiki/Newton%27s_methodk_variables,_k_functions). We will not make the effort of explaining the algorithm exhaustively since it is pretty standard, so we will settle for a summary of the method. Given a system of $N$ non-linear equations in $N$ variables, $\left\{\vec{F}\!\left(\vec{x}\right),\vec{x}\right\}$, the Newton-Raphson method attempts to determine the root vector, $\vec{x}_{\star}$, iteratively through$$\begin{align}\vec{x}_{n+1} = \vec{x}_{n} - J^{-1}_{F}\!\left(\vec{x}_{n}\right)\vec{F}\!\left(\vec{x}_{n}\right)\ ,\end{align}$$where $J^{-1}_{F}$ is the inverse of the Jacobian matrix$$\left(J_{F}\right)^{i}_{\ \ j} = \frac{\partial F^{i}}{\partial x^{j}}\ .$$The index $n$ above is an *iteration* index and $\vec{x}_{n+1}$ represents an improved approximation to $\vec{x}_{\star}$ when compared to $\vec{x}_{n}$. | %%writefile -a $outfile_path__harm_utoprim_2d__c
/************************************************************
general_newton_raphson():
-- performs Newton-Rapshon method on an arbitrary system.
-- inspired in part by Num. Rec.'s routine newt();
*****************************************************************/
static int general_newton_raphson( eos_struct eos, CCTK_REAL x[], int n, long &n_iter,
void (*funcd) (eos_struct, CCTK_REAL [], CCTK_REAL [], CCTK_REAL [],
CCTK_REAL [][NEWT_DIM], CCTK_REAL *,
CCTK_REAL *, int,CCTK_REAL &,CCTK_REAL &,CCTK_REAL &,CCTK_REAL &,CCTK_REAL &),CCTK_REAL &Bsq,CCTK_REAL &QdotBsq,CCTK_REAL &Qtsq,CCTK_REAL &Qdotn,CCTK_REAL &D)
{
CCTK_REAL f, df, dx[NEWT_DIM], x_old[NEWT_DIM];
CCTK_REAL resid[NEWT_DIM], jac[NEWT_DIM][NEWT_DIM];
CCTK_REAL errx, x_orig[NEWT_DIM];
int id, i_extra, doing_extra;
int keep_iterating;
// Initialize various parameters and variables:
errx = 1. ;
df = f = 1.;
i_extra = doing_extra = 0;
for( id = 0; id < n ; id++) x_old[id] = x_orig[id] = x[id] ;
n_iter = 0;
/* Start the Newton-Raphson iterations : */
keep_iterating = 1;
while( keep_iterating ) {
(*funcd) (eos, x, dx, resid, jac, &f, &df, n, Bsq,QdotBsq,Qtsq,Qdotn,D); /* returns with new dx, f, df */
/* Save old values before calculating the new: */
errx = 0.;
for( id = 0; id < n ; id++) {
x_old[id] = x[id] ;
}
/* Make the newton step: */
for( id = 0; id < n ; id++) {
x[id] += dx[id] ;
}
/****************************************/
/* Calculate the convergence criterion */
/****************************************/
errx = (x[0]==0.) ? fabs(dx[0]) : fabs(dx[0]/x[0]);
/****************************************/
/* Make sure that the new x[] is physical : */
/****************************************/
validate_x( x, x_old ) ;
/*****************************************************************************/
/* If we've reached the tolerance level, then just do a few extra iterations */
/* before stopping */
/*****************************************************************************/
if( (fabs(errx) <= NEWT_TOL) && (doing_extra == 0) && (EXTRA_NEWT_ITER > 0) ) {
doing_extra = 1;
}
if( doing_extra == 1 ) i_extra++ ;
if( ((fabs(errx) <= NEWT_TOL)&&(doing_extra == 0))
|| (i_extra > EXTRA_NEWT_ITER) || (n_iter >= (MAX_NEWT_ITER-1)) ) {
keep_iterating = 0;
}
n_iter++;
} // END of while(keep_iterating)
/* Check for bad untrapped divergences : */
if( (finite(f)==0) || (finite(df)==0) ) {
return(2);
}
if( fabs(errx) > MIN_NEWT_TOL){
//CCTK_VInfo(CCTK_THORNSTRING,"%d %e %e %e %e",n_iter,f,df,errx,MIN_NEWT_TOL);
return(1);
}
if( (fabs(errx) <= MIN_NEWT_TOL) && (fabs(errx) > NEWT_TOL) ){
return(0);
}
if( fabs(errx) <= NEWT_TOL ){
return(0);
}
return(0);
} | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
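To make the generic update $\vec{x}_{n+1} = \vec{x}_{n} - J^{-1}_{F}\!\left(\vec{x}_{n}\right)\vec{F}\!\left(\vec{x}_{n}\right)$ concrete, here is a minimal, self-contained Python sketch of the same idea on a toy $2\times2$ system; the system, tolerances, and names are illustrative and completely independent of the HARM-specific residuals handled by `func_vsq()`.

```python
import numpy as np

def toy_newton_raphson(x, tol = 1e-12, max_iter = 20):
    """Solve F(x,y) = (x^2 + y^2 - 4, x*y - 1) = 0 with a 2D Newton-Raphson."""
    for _ in range(max_iter):
        F  = np.array([x[0]**2 + x[1]**2 - 4.0, x[0]*x[1] - 1.0])
        J  = np.array([[2.0*x[0], 2.0*x[1]],
                       [    x[1],     x[0]]])
        dx = np.linalg.solve(J, -F)      # Newton step: J dx = -F
        x  = x + dx
        if np.max(np.abs(dx/x)) < tol:   # relative-error stopping criterion, as in errx above
            break
    return x

print(toy_newton_raphson(np.array([2.0, 0.3])))   # converges to ~(1.932, 0.518)
```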
Step 2.g: The `func_vsq()` function \[Back to [top](toc)\]$$\label{func_vsq}$$This function is used by the `general_newton_raphson()` function to compute the residuals and stepping. We will again not describe it in great detail since the method itself is relatively straightforward. | %%writefile -a $outfile_path__harm_utoprim_2d__c
/**********************************************************************/
/*********************************************************************************
func_vsq():
-- calculates the residuals, and Newton step for general_newton_raphson();
-- for this method, x=W,vsq here;
Arguments:
x = current value of independent var's (on input & output);
dx = Newton-Raphson step (on output);
resid = residuals based on x (on output);
jac = Jacobian matrix based on x (on output);
f = resid.resid/2 (on output)
df = -2*f; (on output)
n = dimension of x[];
*********************************************************************************/
static void func_vsq(eos_struct eos, CCTK_REAL x[], CCTK_REAL dx[], CCTK_REAL resid[],
CCTK_REAL jac[][NEWT_DIM], CCTK_REAL *f, CCTK_REAL *df, int n,
CCTK_REAL &Bsq,CCTK_REAL &QdotBsq,CCTK_REAL &Qtsq,CCTK_REAL &Qdotn,CCTK_REAL &D)
{
CCTK_REAL W, vsq, Wsq, p_tmp, dPdvsq, dPdW;
CCTK_REAL t11;
CCTK_REAL t16;
CCTK_REAL t18;
CCTK_REAL t2;
CCTK_REAL t21;
CCTK_REAL t23;
CCTK_REAL t24;
CCTK_REAL t25;
CCTK_REAL t3;
CCTK_REAL t35;
CCTK_REAL t36;
CCTK_REAL t4;
CCTK_REAL t40;
CCTK_REAL t9;
// vv TESTING vv
// CCTK_REAL D,gtmp,gamma,rho0,w,p,u;
// ^^ TESTING ^^
W = x[0];
vsq = x[1];
Wsq = W*W;
// vv TESTING vv
/*
D = U[RHO] ;
gtmp = sqrt(1. - vsq);
gamma = 1./gtmp ;
rho0 = D * gtmp;
w = W * (1. - vsq) ;
p = pressure_rho0_w(eos, rho0,w) ;
u = w - (rho0 + p) ;
if(u<=0 && 1==1) {
vsq = 0.9999999 * (1.0-(rho0+p)/W);
w = W * (1. - vsq) ;
p = pressure_rho0_w(eos, rho0,w) ;
u = w - (rho0 + p) ;
//CCTK_VInfo(CCTK_THORNSTRING,"%e check",u);
}
*/
// ^^ TESTING ^^
p_tmp = pressure_W_vsq( eos, W, vsq , D);
dPdW = dpdW_calc_vsq( W, vsq );
dPdvsq = dpdvsq_calc( eos, W, vsq, D );
// These expressions were calculated using Mathematica, but made into efficient
// code using Maple. Since we know the analytic form of the equations, we can
// explicitly calculate the Newton-Raphson step:
t2 = -0.5*Bsq+dPdvsq;
t3 = Bsq+W;
t4 = t3*t3;
t9 = 1/Wsq;
t11 = Qtsq-vsq*t4+QdotBsq*(Bsq+2.0*W)*t9;
t16 = QdotBsq*t9;
t18 = -Qdotn-0.5*Bsq*(1.0+vsq)+0.5*t16-W+p_tmp;
t21 = 1/t3;
t23 = 1/W;
t24 = t16*t23;
t25 = -1.0+dPdW-t24;
t35 = t25*t3+(Bsq-2.0*dPdvsq)*(QdotBsq+vsq*Wsq*W)*t9*t23;
t36 = 1/t35;
dx[0] = -(t2*t11+t4*t18)*t21*t36;
t40 = (vsq+t24)*t3;
dx[1] = -(-t25*t11-2.0*t40*t18)*t21*t36;
//detJ = t3*t35; // <- set but not used...
jac[0][0] = -2.0*t40;
jac[0][1] = -t4;
jac[1][0] = t25;
jac[1][1] = t2;
resid[0] = t11;
resid[1] = t18;
*df = -resid[0]*resid[0] - resid[1]*resid[1];
*f = -0.5 * ( *df );
} | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
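For readers trying to map the Maple-generated temporaries back onto the physics: reading directly off `t11` and `t18` above, the two residuals being zeroed are$$\begin{align}{\rm resid}[0] &= \tilde{Q}^{2} - v^{2}\left(B^{2}+W\right)^{2} + \frac{\left(Q\cdot B\right)^{2}\left(B^{2}+2W\right)}{W^{2}}\ ,\\{\rm resid}[1] &= -\left(Q\cdot n\right) - \frac{B^{2}}{2}\left(1+v^{2}\right) + \frac{\left(Q\cdot B\right)^{2}}{2W^{2}} - W + p\left(W,v^{2}\right)\ ,\end{align}$$so that ${\rm resid}[0]=0$ is just the boxed equation for $v^{2}$ above (eq. 28 of [Noble *et al.* (2006)](https://arxiv.org/abs/astro-ph/0512420)) rearranged, while ${\rm resid}[1]=0$ is the energy equation obtained by contracting $Q_{\mu}$ with $n^{\mu}$, with the pressure supplied by `pressure_W_vsq()`.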
Step 3: EOS dependent routines \[Back to [top](toc)\]$$\label{harm_utoprim_2d__c__eos_dep}$$ | %%writefile -a $outfile_path__harm_utoprim_2d__c
/**********************************************************************
**********************************************************************
The following routines specify the equation of state. All routines
above here should be indpendent of EOS. If the user wishes
to use another equation of state, the below functions must be replaced
by equivalent routines based upon the new EOS.
**********************************************************************
**********************************************************************/ | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 3.a: The `pressure_W_vsq()` function \[Back to [top](toc)\]$$\label{pressure_w_vsq}$$This function computes $p\left(W,v^{2}\right)$. For a $\Gamma$-law equation of state,$$p_{\Gamma} = \left(\Gamma-1\right)u\ ,$$and with the definitions$$\begin{align}\gamma^{2} &= \frac{1}{1-v^{2}}\ ,\\W &= \gamma^{2}w\ ,\\D &= \gamma\rho_{b}\ ,\\w &= \rho_{b} + u + p\ ,\end{align}$$we have$$\begin{align}p_{\Gamma} &= \left(\Gamma-1\right)u\\ &= \left(\Gamma-1\right)\left(w - \rho_{b} - p_{\Gamma}\right)\\ &= \left(\Gamma-1\right)\left(\frac{W}{\gamma^{2}} - \frac{D}{\gamma}\right) - \left(\Gamma-1\right)p_{\Gamma}\\\implies&\boxed{p_{\Gamma} = \frac{\left(\Gamma-1\right)}{\Gamma}\left(\frac{W}{\gamma^{2}} - \frac{D}{\gamma}\right)}\ .\end{align}$$Thus, the pre-PPEOS Patch version of this function was```c/**********************************************************************//********************************************************************** pressure_W_vsq(): -- Gamma-law equation of state; -- pressure as a function of W, vsq, and D:**********************************************************************/static CCTK_REAL pressure_W_vsq(CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D) { CCTK_REAL gtmp; gtmp = 1. - vsq; return( (GAMMA - 1.) * ( W * gtmp - D * sqrt(gtmp) ) / GAMMA );}```We are now, however, interested in the hybrid EOS of the form$$p_{\rm hybrid} = P_{\rm cold} + P_{\rm th}\ ,$$where $P_{\rm cold}$ is given by a single or piecewise polytrope EOS,$$P_{\rm cold} = K_{i}\rho_{b}^{\Gamma_{i}}\ ,$$$P_{\rm th}$ accounts for thermal effects and is given by$$P_{\rm th} = \left(\Gamma_{\rm th} - 1\right)\epsilon_{\rm th}\ ,$$and$$\begin{align}\epsilon \equiv \frac{u}{\rho_{b}} &= \epsilon_{\rm th}+\epsilon_{\rm cold}\ ,\\\epsilon_{\rm cold} &= \int d\rho \frac{P_{\rm cold}(\rho)}{\rho^{2}}\ .\end{align}$$We then have$$\begin{align}p_{\rm hybrid} &= P_{\rm cold} + P_{\rm th}\\ &= P_{\rm cold} + \left(\Gamma_{\rm th}-1\right)\rho_{b}\epsilon_{\rm th}\\ &= P_{\rm cold} + \left(\Gamma_{\rm th}-1\right)\rho_{b}\left(\epsilon - \epsilon_{\rm cold}\right)\\ &= P_{\rm cold} + \left(\Gamma_{\rm th}-1\right)\left(u - \frac{D}{\gamma}\epsilon_{\rm cold}\right)\\ &= P_{\rm cold} + \left(\Gamma_{\rm th}-1\right)\left(w - \rho_{b} - p_{\rm hybrid} - \frac{D}{\gamma}\epsilon_{\rm cold}\right)\\ &= P_{\rm cold} + \left(\Gamma_{\rm th}-1\right)\left(\frac{W}{\gamma^{2}} - \frac{D}{\gamma} - \frac{D}{\gamma}\epsilon_{\rm cold}\right)-\left(\Gamma_{\rm th}-1\right)p_{\rm hybrid}\\ &= P_{\rm cold} + \left(\Gamma_{\rm th}-1\right)\left[\frac{W}{\gamma^{2}} - \frac{D}{\gamma}\left(1+\epsilon_{\rm cold}\right)\right]-\left(\Gamma_{\rm th}-1\right)p_{\rm hybrid}\\\implies&\boxed{ p_{\rm hybrid} = \frac{P_{\rm cold}}{\Gamma_{\rm th}} + \frac{\left(\Gamma_{\rm th}-1\right)}{\Gamma_{\rm th}}\left[\frac{W}{\gamma^{2}} - \frac{D}{\gamma}\left(1+\epsilon_{\rm cold}\right)\right] }\end{align}$$ | %%writefile -a $outfile_path__harm_utoprim_2d__c
/**********************************************************************/
/**********************************************************************
pressure_W_vsq():
-- Hybrid single and piecewise polytropic equation of state;
-- pressure as a function of P_cold, eps_cold, W, vsq, and D:
**********************************************************************/
static CCTK_REAL pressure_W_vsq(eos_struct eos, CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D)
{
#ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
DECLARE_CCTK_PARAMETERS;
#endif
// Compute gamma^{-2} = 1 - v^{2} and gamma^{-1}
CCTK_REAL inv_gammasq = 1.0 - vsq;
CCTK_REAL inv_gamma = sqrt(inv_gammasq);
// Compute rho_b = D / gamma
CCTK_REAL rho_b = D*inv_gamma;
// Compute P_cold and eps_cold
CCTK_REAL P_cold, eps_cold;
compute_P_cold__eps_cold(eos,rho_b, P_cold,eps_cold);
// Compute p = P_{cold} + P_{th}
return( ( P_cold + (Gamma_th - 1.0)*( W*inv_gammasq - D*inv_gamma*( 1.0 + eps_cold ) ) )/Gamma_th );
} | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
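As a quick sanity check on the boxed $\Gamma$-law relation above, one can build $W$ and $D$ from a hand-picked primitive state and confirm that the same pressure comes back out; algebraically, $\frac{\Gamma-1}{\Gamma}\left(\frac{W}{\gamma^{2}}-\frac{D}{\gamma}\right) = \frac{\Gamma-1}{\Gamma}\left(u+p\right) = \left(\Gamma-1\right)u$. A minimal Python sketch of this check follows; the numerical values are arbitrary and purely illustrative.

```python
import numpy as np

# Arbitrary physical state (code units); Gamma-law EOS with Gamma = 5/3
rho, u, vsq, Gamma = 1.2e-3, 4.5e-4, 0.25, 5.0/3.0

p     = (Gamma - 1.0)*u                # Gamma-law pressure, p = (Gamma - 1) u
w     = rho + u + p                    # enthalpy density, w = rho_b + u + p
gamma = 1.0/np.sqrt(1.0 - vsq)
W, D  = w*gamma**2, rho*gamma          # W = w gamma^2 , D = gamma rho_b

p_from_W_vsq = (Gamma - 1.0)*( W*(1.0 - vsq) - D*np.sqrt(1.0 - vsq) )/Gamma
print(np.isclose(p, p_from_W_vsq))     # expect True
```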
Step 3.b: The `dpdW_calc_vsq()` function \[Back to [top](toc)\]$$\label{dpdw_calc_vsq}$$This function computes $\frac{\partial p\left(W,v^{2}\right)}{\partial W}$. For a $\Gamma$-law equation of state, remember that$$p_{\Gamma} = \frac{\left(\Gamma-1\right)}{\Gamma}\left(\frac{W}{\gamma^{2}} - \frac{D}{\gamma}\right)\ ,$$which then implies$$\boxed{\frac{\partial p_{\Gamma}}{\partial W} = \frac{\Gamma-1}{\Gamma \gamma^{2}} = \frac{\left(\Gamma-1\right)\left(1-v^{2}\right)}{\Gamma}}\ .$$Thus, the pre-PPEOS Patch version of this function was```c/**********************************************************************//********************************************************************** dpdW_calc_vsq(): -- partial derivative of pressure with respect to W;**********************************************************************/static CCTK_REAL dpdW_calc_vsq(CCTK_REAL W, CCTK_REAL vsq){ return( (GAMMA - 1.) * (1. - vsq) / GAMMA ) ;}```For the case of a hybrid, single or piecewise polytropic EOS, we have$$p_{\rm hybrid} = \frac{P_{\rm cold}}{\Gamma_{\rm th}} + \frac{\left(\Gamma_{\rm th}-1\right)}{\Gamma_{\rm th}}\left[\frac{W}{\gamma^{2}} - \frac{D}{\gamma}\left(1+\epsilon_{\rm cold}\right)\right]\ .$$It is important to notice that the cold components of $p_{\rm hybrid}$ are *not* functions of $W$, but instead functions of $D$: $P_{\rm cold} = P_{\rm cold}(\rho_{b}) = P_{\rm cold}(D)$ and $\epsilon_{\rm cold} = \epsilon_{\rm cold}(\rho_{b}) = \epsilon_{\rm cold}(D)$. Thus$$\boxed{\frac{\partial p_{\rm hybrid}}{\partial W} = \frac{\Gamma_{\rm th}-1}{\Gamma_{\rm th} \gamma^{2}} = \frac{\left(\Gamma_{\rm th}-1\right)\left(1-v^{2}\right)}{\Gamma_{\rm th}}}\ .$$ | %%writefile -a $outfile_path__harm_utoprim_2d__c
/**********************************************************************/
/**********************************************************************
dpdW_calc_vsq():
-- partial derivative of pressure with respect to W;
**********************************************************************/
static CCTK_REAL dpdW_calc_vsq(CCTK_REAL W, CCTK_REAL vsq)
{
#ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
DECLARE_CCTK_PARAMETERS;
#endif
return( (Gamma_th - 1.0) * (1.0 - vsq) / Gamma_th ) ;
} | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 3.c: The `dpdvsq_calc()` function \[Back to [top](toc)\]$$\label{dpdvsq_calc}$$This function computes $\frac{\partial p\left(W,v^{2}\right)}{\partial W}$. For a $\Gamma$-law equation of state, remember that$$p_{\Gamma} = \frac{\left(\Gamma-1\right)}{\Gamma}\left(\frac{W}{\gamma^{2}} - \frac{D}{\gamma}\right) = \frac{\left(\Gamma-1\right)}{\Gamma}\left[W\left(1-v^{2}\right) - D\sqrt{1-v^{2}}\right]\ ,$$which then implies$$\boxed{\frac{\partial p_{\Gamma}}{\partial\left(v^{2}\right)} = \frac{\Gamma-1}{\Gamma}\left(\frac{D}{2\sqrt{1-v^{2}}}-W\right)} \ .$$Thus, the pre-PPEOS Patch version of this function was```c/**********************************************************************//********************************************************************** dpdvsq_calc(): -- partial derivative of pressure with respect to vsq**********************************************************************/static CCTK_REAL dpdvsq_calc(CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D){ return( (GAMMA - 1.) * ( 0.5 * D / sqrt(1.-vsq) - W ) / GAMMA ) ;}``` Step 3.c.i: Setting basic quantities and computing $P_{\rm cold}$ and $\epsilon_{\rm cold}$ \[Back to [top](toc)\]$$\label{dpdvsq_calc__basic_quantities}$$For the case of a hybrid, single or piecewise polytropic EOS, we have$$p_{\rm hybrid} = \frac{P_{\rm cold}}{\Gamma_{\rm th}} + \frac{\left(\Gamma_{\rm th}-1\right)}{\Gamma_{\rm th}}\left[\frac{W}{\gamma^{2}} - \frac{D}{\gamma}\left(1+\epsilon_{\rm cold}\right)\right]\ .$$Let us thus begin by setting the necessary parameters from the hybrid EOS. | %%writefile -a $outfile_path__harm_utoprim_2d__c
/**********************************************************************/
/**********************************************************************
dpdvsq_calc():
-- partial derivative of pressure with respect to vsq
**********************************************************************/
static CCTK_REAL dpdvsq_calc(eos_struct eos, CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D)
{
// This sets Gamma_th
#ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
DECLARE_CCTK_PARAMETERS;
#endif
// Set gamma and rho
CCTK_REAL gamma = 1.0/sqrt(1.0 - vsq);
CCTK_REAL rho_b = D/gamma;
// Compute P_cold and eps_cold
CCTK_REAL P_cold, eps_cold;
compute_P_cold__eps_cold(eos,rho_b, P_cold,eps_cold);
// Set basic polytropic quantities
int polytropic_index = find_polytropic_K_and_Gamma_index(eos,rho_b);
CCTK_REAL Gamma_ppoly_tab = eos.Gamma_ppoly_tab[polytropic_index]; | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 3.c.ii: Computing $\frac{\partial P_{\rm cold}}{\partial\left(v^{2}\right)}$ \[Back to [top](toc)\]$$\label{dpdvsq_calc__dpcolddvsq}$$Next, remember that $P_{\rm cold} = P_{\rm cold}(\rho_{b}) = P_{\rm cold}(D,v^{2})$ and also $\epsilon_{\rm cold} = \epsilon_{\rm cold}(D,v^{2})$. Therefore, we must start by finding the derivatives of $P_{\rm cold}$ and $\epsilon_{\rm cold}$ with respect to $v^{2}$.Let us first notice that$$\frac{\partial\gamma}{\partial\left(v^{2}\right)} = \frac{\partial}{\partial\left(v^{2}\right)}\left[\frac{1}{\sqrt{1-v^{2}}}\right] = \frac{1}{2}\left(1-v^{2}\right)^{-3/2} = \frac{\gamma^{3}}{2}\ .$$Thus, for a general power$$\frac{\partial\gamma^{a}}{\partial\left(v^{2}\right)} = a\gamma^{a-1}\frac{\partial\gamma}{\partial\left(v^{2}\right)} = a\gamma^{a-1}\left(\frac{\gamma^{3}}{2}\right) = \frac{a}{2}\gamma^{a+2}$$Thus we have$$\begin{align}\frac{\partial P_{\rm cold}}{\partial \left(v^{2}\right)}&= \frac{\partial}{\partial\left(v^{2}\right)}\left(K_{\rm poly}\rho_{b}^{\Gamma_{\rm poly}}\right)\\&= \frac{\partial}{\partial\left(v^{2}\right)}\left[K_{\rm poly}\left(\frac{D}{\gamma}\right)^{\Gamma_{\rm poly}}\right]\\&= K_{\rm poly}D^{\Gamma_{\rm poly}}\frac{\partial}{\partial\left(v^{2}\right)}\left[\gamma^{-\Gamma_{\rm poly}/2}\right]\\&=K_{\rm poly}D^{\Gamma_{\rm poly}}\left[\frac{-\Gamma_{\rm poly}/2}{2}\gamma^{-\Gamma_{\rm poly}/2 + 2}\right]\\&=K_{\rm poly}\left(\frac{D}{\gamma}\right)^{\Gamma_{\rm poly}}\gamma^{-\frac{\Gamma_{\rm poly}}{2} + 2 + \Gamma_{\rm poly}}\\\implies &\boxed{ \frac{\partial P_{\rm cold}}{\partial \left(v^{2}\right)} = \gamma^{2+\frac{\Gamma_{\rm poly}}{2}}P_{\rm cold}}\ .\end{align}$$ | %%writefile -a $outfile_path__harm_utoprim_2d__c
/* Now we implement the derivative of P_cold with respect
* to v^{2}, given by
* ----------------------------------------------------
* | dP_cold/dvsq = gamma^{2 + Gamma_{poly}/2} P_{cold} |
* ----------------------------------------------------
*/
CCTK_REAL dPcold_dvsq = P_cold * pow(gamma,2.0 + 0.5*Gamma_ppoly_tab); | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 3.c.iii: Computing $\frac{\partial \epsilon_{\rm cold}}{\partial\left(v^{2}\right)}$ \[Back to [top](toc)\]$$\label{dpdvsq_calc__depscolddvsq}$$Now, obtaining $\epsilon_{\rm cold}$ from $P_{\rm cold}$ requires an integration and, therefore, generates an integration constant. Since we are interested in a *derivative* of $\epsilon_{\rm cold}$, however, we will simply drop the constant altogether. Remember that:$$\epsilon_{\rm cold} = K_{\rm poly}\int d\rho_{b} \rho_{b}^{\Gamma_{\rm poly}-2} = \frac{K_{\rm poly}\rho_{b}^{\Gamma_{\rm poly}-1}}{\Gamma_{\rm poly}-1} = \frac{P_{\rm cold}}{\rho_{b}\left(\Gamma_{\rm poly}-1\right)} = \frac{\gamma P_{\rm cold}}{D\left(\Gamma_{\rm poly}-1\right)}\ .$$Thus$$\begin{align}\frac{\partial \epsilon_{\rm cold}}{\partial \left(v^{2}\right)}&= \frac{1}{D\left(\Gamma_{\rm poly}-1\right)}\left[\gamma\frac{\partial P_{\rm cold}}{\partial \left(v^{2}\right)} + P_{\rm cold}\frac{\partial\gamma}{\partial \left(v^{2}\right)}\right]\\&=\frac{1}{D\left(\Gamma_{\rm poly}-1\right)}\left[\gamma\frac{\partial P_{\rm cold}}{\partial \left(v^{2}\right)} + P_{\rm cold}\left(\frac{\gamma^{3}}{2}\right)\right]\\\implies &\boxed{\frac{\partial \epsilon_{\rm cold}}{\partial \left(v^{2}\right)} = \frac{\gamma}{D\left(\Gamma_{\rm poly}-1\right)}\left[\frac{\partial P_{\rm cold}}{\partial \left(v^{2}\right)} + \frac{\gamma^{2} P_{\rm cold}}{2}\right]\ .}\end{align}$$ | %%writefile -a $outfile_path__harm_utoprim_2d__c
/* Now we implement the derivative of eps_cold with respect
* to v^{2}, given by
* -----------------------------------------------------------------------------------
* | deps_cold/dvsq = gamma/(D*(Gamma_ppoly_tab-1)) * (dP_cold/dvsq + gamma^{2} P_cold / 2) |
* -----------------------------------------------------------------------------------
*/
CCTK_REAL depscold_dvsq = ( gamma/(D*(Gamma_ppoly_tab-1.0)) ) * ( dPcold_dvsq + 0.5*gamma*gamma*P_cold ); | Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 3.c.iv: Computing $\frac{\partial p_{\rm hybrid}}{\partial\left(v^{2}\right)}$ \[Back to [top](toc)\]$$\label{dpdvsq_calc__dpdvsq}$$Finally, remembering that$$\begin{align}p_{\rm hybrid} &= \frac{P_{\rm cold}}{\Gamma_{\rm th}} + \frac{\left(\Gamma_{\rm th}-1\right)}{\Gamma_{\rm th}}\left[\frac{W}{\gamma^{2}} - \frac{D}{\gamma}\left(1+\epsilon_{\rm cold}\right)\right]\ ,\\\frac{\partial\gamma^{a}}{\partial\left(v^{2}\right)} &= \frac{a}{2}\gamma^{a+2}\ ,\end{align}$$we have$$\boxed{\frac{\partial p_{\rm hybrid}}{\partial\left(v^{2}\right)}= \frac{1}{\Gamma_{\rm th}}\left\{\frac{\partial P_{\rm cold}}{\partial\left(v^{2}\right)} + \left(\Gamma_{\rm th}-1\right)\left[-W + \frac{D\gamma}{2}\left(1+\epsilon_{\rm cold}\right) - \frac{D}{\gamma}\frac{\partial \epsilon_{\rm cold}}{\partial\left(v^{2}\right)}\right]\right\}\ .}$$ | %%writefile -a $outfile_path__harm_utoprim_2d__c
/* Now we implement the derivative of p_hybrid with respect
* to v^{2}, given by
* -----------------------------------------------------------------------------
* | dp/dvsq = Gamma_th^{-1}( dP_cold/dvsq |
* | + (Gamma_{th}-1)*(-W |
* | + D gamma (1 + eps_cold)/2 |
* | - (D/gamma) * deps_cold/dvsq) ) |
* -----------------------------------------------------------------------------
*/
return( ( dPcold_dvsq + (Gamma_th-1.0)*( -W + D*gamma*(1+eps_cold)/2.0 - D*depscold_dvsq/gamma ) )/Gamma_th );
}
/******************************************************************************
END OF UTOPRIM_2D.C
******************************************************************************/
#endif
| Appending to ../src/harm_utoprim_2d.c
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 4: Code validation \[Back to [top](toc)\]$$\label{code_validation}$$First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook. | # Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
from os import path
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/harm_utoprim_2d.c"
original_IGM_file_name = "harm_utoprim_2d-original.c"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write down the file the original IllinoisGRMHD source code
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__harm_utoprim_2d__c = !diff $original_IGM_file_path $outfile_path__harm_utoprim_2d__c
if Validation__harm_utoprim_2d__c == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for harm_utoprim_2d.c: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for harm_utoprim_2d.c: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__harm_utoprim_2d__c:
print(diff_line) | Validation test for harm_utoprim_2d.c: FAILED!
Diff:
0a1,2
> #ifndef __HARM_UTOPRIM_2D__C__
> #define __HARM_UTOPRIM_2D__C__
70,72c72,74
< static int Utoprim_new_body(CCTK_REAL U[], CCTK_REAL gcov[NDIM][NDIM], CCTK_REAL gcon[NDIM][NDIM], CCTK_REAL gdet, CCTK_REAL prim[],long &n_iter);
< static int general_newton_raphson( CCTK_REAL x[], int n, long &n_iter, void (*funcd) (CCTK_REAL [], CCTK_REAL [], CCTK_REAL [], CCTK_REAL [][NEWT_DIM], CCTK_REAL *, CCTK_REAL *, int,CCTK_REAL &,CCTK_REAL &,CCTK_REAL &,CCTK_REAL &,CCTK_REAL &),CCTK_REAL &Bsq,CCTK_REAL &QdotBsq,CCTK_REAL &Qtsq,CCTK_REAL &Qdotn,CCTK_REAL &D);
< static void func_vsq( CCTK_REAL [], CCTK_REAL [], CCTK_REAL [], CCTK_REAL [][NEWT_DIM], CCTK_REAL *f, CCTK_REAL *df, int n,CCTK_REAL &Bsq,CCTK_REAL &QdotBsq,CCTK_REAL &Qtsq,CCTK_REAL &Qdotn,CCTK_REAL &D);
---
> static int Utoprim_new_body(eos_struct eos, CCTK_REAL U[], CCTK_REAL gcov[NDIM][NDIM], CCTK_REAL gcon[NDIM][NDIM], CCTK_REAL gdet, CCTK_REAL prim[],long &n_iter);
> static int general_newton_raphson( eos_struct eos, CCTK_REAL x[], int n, long &n_iter, void (*funcd) (eos_struct, CCTK_REAL [], CCTK_REAL [], CCTK_REAL [], CCTK_REAL [][NEWT_DIM], CCTK_REAL *, CCTK_REAL *, int,CCTK_REAL &,CCTK_REAL &,CCTK_REAL &,CCTK_REAL &,CCTK_REAL &),CCTK_REAL &Bsq,CCTK_REAL &QdotBsq,CCTK_REAL &Qtsq,CCTK_REAL &Qdotn,CCTK_REAL &D);
> static void func_vsq( eos_struct eos, CCTK_REAL [], CCTK_REAL [], CCTK_REAL [], CCTK_REAL [][NEWT_DIM], CCTK_REAL *f, CCTK_REAL *df, int n,CCTK_REAL &Bsq,CCTK_REAL &QdotBsq,CCTK_REAL &Qtsq,CCTK_REAL &Qdotn,CCTK_REAL &D);
74c76
< static CCTK_REAL pressure_W_vsq(CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D) ;
---
> static CCTK_REAL pressure_W_vsq(eos_struct eos, CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D) ;
76c78
< static CCTK_REAL dpdvsq_calc(CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D);
---
> static CCTK_REAL dpdvsq_calc(eos_struct eos, CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D);
89,97c91,99
< / rho u^t \
< U = | T^t_t + rho u^t | sqrt(-det(g_{\mu\nu}))
< | T^t_i |
< \ B^i /
<
< / rho \
< P = | uu |
< | \tilde{u}^i |
< \ B^i /
---
> / rho u^t \
> U = | T^t_t + rho u^t | sqrt(-det(g_{\mu\nu}))
> | T^t_i |
> \ B^i /
>
> / rho \
> P = | uu |
> | \tilde{u}^i |
> \ B^i /
101c103
< U[NPR] = conserved variables (current values on input/output);
---
> U[NPR] = conserved variables (current values on input/output);
105,106c107,108
< prim[NPR] = primitive variables (guess on input, calculated values on
< output if there are no problems);
---
> prim[NPR] = primitive variables (guess on input, calculated values on
> output if there are no problems);
114c116,117
< int Utoprim_2d(CCTK_REAL U[NPR], CCTK_REAL gcov[NDIM][NDIM], CCTK_REAL gcon[NDIM][NDIM],
---
>
> int Utoprim_2d(eos_struct eos, CCTK_REAL U[NPR], CCTK_REAL gcov[NDIM][NDIM], CCTK_REAL gcon[NDIM][NDIM],
130a134
>
141a146
>
150c155
< ret = Utoprim_new_body(U_tmp, gcov, gcon, gdet, prim_tmp,n_iter);
---
> ret = Utoprim_new_body(eos, U_tmp, gcov, gcon, gdet, prim_tmp,n_iter);
163a169
>
175,177c181,183
< / rho gamma \
< U = | alpha T^t_\mu |
< \ alpha B^i /
---
> / rho gamma \
> U = | alpha T^t_\mu |
> \ alpha B^i /
181,184c187,190
< / rho \
< prim = | uu |
< | \tilde{u}^i |
< \ alpha B^i /
---
> / rho \
> prim = | uu |
> | \tilde{u}^i |
> \ alpha B^i /
198c204
< 3 -> failure: W<0 or W>W_TOO_BIG
---
> 3 -> failure: W<0 or W>W_TOO_BIG
204c210
< static int Utoprim_new_body(CCTK_REAL U[NPR], CCTK_REAL gcov[NDIM][NDIM],
---
> static int Utoprim_new_body(eos_struct eos, CCTK_REAL U[NPR], CCTK_REAL gcov[NDIM][NDIM],
217a224
>
237a245,248
> // FIXME: The exact form of n^{\mu} can be found
> // in eq. (2.116) and implementing it
> // directly is a lot more efficient than
> // performing n^{\mu} = g^{\mu\nu}n_{nu}
248a260
>
270c282
< p = pressure_rho0_u(rho0,u) ;
---
> p = pressure_rho0_u(eos, rho0,u) ;
283a296
>
288c301
< retval = general_newton_raphson( x_2d, n, n_iter, func_vsq, Bsq,QdotBsq,Qtsq,Qdotn,D) ;
---
> retval = general_newton_raphson( eos, x_2d, n, n_iter, func_vsq, Bsq,QdotBsq,Qtsq,Qdotn,D) ;
311a325
>
318c332
< p = pressure_rho0_w(rho0,w) ;
---
> p = pressure_rho0_w(eos, rho0,w) ;
352a367
>
371a387
>
393a410
>
419a437
>
429,430c447,448
< static int general_newton_raphson( CCTK_REAL x[], int n, long &n_iter,
< void (*funcd) (CCTK_REAL [], CCTK_REAL [], CCTK_REAL [],
---
> static int general_newton_raphson( eos_struct eos, CCTK_REAL x[], int n, long &n_iter,
> void (*funcd) (eos_struct, CCTK_REAL [], CCTK_REAL [], CCTK_REAL [],
454c472
< (*funcd) (x, dx, resid, jac, &f, &df, n, Bsq,QdotBsq,Qtsq,Qdotn,D); /* returns with new dx, f, df */
---
> (*funcd) (eos, x, dx, resid, jac, &f, &df, n, Bsq,QdotBsq,Qtsq,Qdotn,D); /* returns with new dx, f, df */
522a541
>
540c559
< static void func_vsq(CCTK_REAL x[], CCTK_REAL dx[], CCTK_REAL resid[],
---
> static void func_vsq(eos_struct eos, CCTK_REAL x[], CCTK_REAL dx[], CCTK_REAL resid[],
579c598
< p = pressure_rho0_w(rho0,w) ;
---
> p = pressure_rho0_w(eos, rho0,w) ;
586c605
< p = pressure_rho0_w(rho0,w) ;
---
> p = pressure_rho0_w(eos, rho0,w) ;
595c614
< p_tmp = pressure_W_vsq( W, vsq , D);
---
> p_tmp = pressure_W_vsq( eos, W, vsq , D);
597c616
< dPdvsq = dpdvsq_calc( W, vsq, D );
---
> dPdvsq = dpdvsq_calc( eos, W, vsq, D );
635a655
>
646a667
>
651,652c672,673
< -- Gamma-law equation of state;
< -- pressure as a function of W, vsq, and D:
---
> -- Hybrid single and piecewise polytropic equation of state;
> -- pressure as a function of P_cold, eps_cold, W, vsq, and D:
654c675
< static CCTK_REAL pressure_W_vsq(CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D)
---
> static CCTK_REAL pressure_W_vsq(eos_struct eos, CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D)
655a677,678
>
> #ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
656a680
> #endif
658,661c682,694
< CCTK_REAL gtmp;
< gtmp = 1. - vsq;
<
< return( (gamma_th /* <- Should be local polytropic Gamma factor */ - 1.) * ( W * gtmp - D * sqrt(gtmp) ) / gamma_th /* <- Should be local polytropic Gamma factor */ );
---
> // Compute gamma^{-2} = 1 - v^{2} and gamma^{-1}
> CCTK_REAL inv_gammasq = 1.0 - vsq;
> CCTK_REAL inv_gamma = sqrt(inv_gammasq);
>
> // Compute rho_b = D / gamma
> CCTK_REAL rho_b = D*inv_gamma;
>
> // Compute P_cold and eps_cold
> CCTK_REAL P_cold, eps_cold;
> compute_P_cold__eps_cold(eos,rho_b, P_cold,eps_cold);
>
> // Compute p = P_{cold} + P_{th}
> return( ( P_cold + (Gamma_th - 1.0)*( W*inv_gammasq - D*inv_gamma*( 1.0 + eps_cold ) ) )/Gamma_th );
665a699
>
673a708,709
>
> #ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
675c711,713
< return( (gamma_th /* <- Should be local polytropic Gamma factor */ - 1.) * (1. - vsq) / gamma_th /* <- Should be local polytropic Gamma factor */ ) ;
---
> #endif
>
> return( (Gamma_th - 1.0) * (1.0 - vsq) / Gamma_th ) ;
678a717
>
685c724
< static CCTK_REAL dpdvsq_calc(CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D)
---
> static CCTK_REAL dpdvsq_calc(eos_struct eos, CCTK_REAL W, CCTK_REAL vsq, CCTK_REAL &D)
686a726,728
>
> // This sets Gamma_th
> #ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
688c730,772
< return( (gamma_th /* <- Should be local polytropic Gamma factor */ - 1.) * ( 0.5 * D / sqrt(1.-vsq) - W ) / gamma_th /* <- Should be local polytropic Gamma factor */ ) ;
---
> #endif
>
>
> // Set gamma and rho
> CCTK_REAL gamma = 1.0/sqrt(1.0 - vsq);
> CCTK_REAL rho_b = D/gamma;
>
> // Compute P_cold and eps_cold
> CCTK_REAL P_cold, eps_cold;
> compute_P_cold__eps_cold(eos,rho_b, P_cold,eps_cold);
>
> // Set basic polytropic quantities
> int polytropic_index = find_polytropic_K_and_Gamma_index(eos,rho_b);
> CCTK_REAL Gamma_ppoly_tab = eos.Gamma_ppoly_tab[polytropic_index];
>
>
> /* Now we implement the derivative of P_cold with respect
> * to v^{2}, given by
> * ----------------------------------------------------
> * | dP_cold/dvsq = gamma^{2 + Gamma_{poly}/2} P_{cold} |
> * ----------------------------------------------------
> */
> CCTK_REAL dPcold_dvsq = P_cold * pow(gamma,2.0 + 0.5*Gamma_ppoly_tab);
>
>
> /* Now we implement the derivative of eps_cold with respect
> * to v^{2}, given by
> * -----------------------------------------------------------------------------------
> * | deps_cold/dvsq = gamma/(D*(Gamma_ppoly_tab-1)) * (dP_cold/dvsq + gamma^{2} P_cold / 2) |
> * -----------------------------------------------------------------------------------
> */
> CCTK_REAL depscold_dvsq = ( gamma/(D*(Gamma_ppoly_tab-1.0)) ) * ( dPcold_dvsq + 0.5*gamma*gamma*P_cold );
>
> /* Now we implement the derivative of p_hybrid with respect
> * to v^{2}, given by
> * -----------------------------------------------------------------------------
> * | dp/dvsq = Gamma_th^{-1}( dP_cold/dvsq |
> * | + (Gamma_{th}-1)*(-W |
> * | + D gamma (1 + eps_cold)/2 |
> * | - (D/gamma) * deps_cold/dvsq) ) |
> * -----------------------------------------------------------------------------
> */
> return( ( dPcold_dvsq + (Gamma_th-1.0)*( -W + D*gamma*(1+eps_cold)/2.0 - D*depscold_dvsq/gamma ) )/Gamma_th );
694a779
> #endif
| BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-IllinoisGRMHD__harm_utoprim_2d.pdf](Tutorial-IllinoisGRMHD__harm_utoprim_2d.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means). | latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx")
#!jupyter nbconvert --to latex --template $latex_nrpy_style_path --log-level='WARN' Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__harm_utoprim_2d.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__harm_utoprim_2d.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__harm_utoprim_2d.tex
!rm -f Tut*.out Tut*.aux Tut*.log | _____no_output_____ | BSD-2-Clause | IllinoisGRMHD/doc/Tutorial-IllinoisGRMHD__harm_utoprim_2d.ipynb | ksible/nrpytutorial |
T81-558: Applications of Deep Neural Networks**Module 8: Kaggle Data Sets*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 8 Material* Part 8.1: Introduction to Kaggle [[Video]](https://www.youtube.com/watch?v=v4lJBhdCuCU&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_08_1_kaggle_intro.ipynb)* Part 8.2: Building Ensembles with Scikit-Learn and Keras [[Video]](https://www.youtube.com/watch?v=LQ-9ZRBLasw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_08_2_keras_ensembles.ipynb)* **Part 8.3: How Should you Architect Your Keras Neural Network: Hyperparameters** [[Video]](https://www.youtube.com/watch?v=1q9klwSoUQw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_3_keras_hyperparameters.ipynb)* Part 8.4: Bayesian Hyperparameter Optimization for Keras [[Video]](https://www.youtube.com/watch?v=sXdxyUCCm8s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_08_4_bayesian_hyperparameter_opt.ipynb)* Part 8.5: Current Semester's Kaggle [[Video]](https://www.youtube.com/watch?v=PHQt0aUasRg&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_08_5_kaggle_project.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. | # Startup CoLab
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s) | Note: not using Google CoLab
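As a small illustration (not part of the course notebook), `hms_string` formats an elapsed time in seconds as `h:mm:ss.ss`:

```python
# 2 hours, 3 minutes and 4.5 seconds
print(hms_string(2*3600 + 3*60 + 4.5))   # -> 2:03:04.50
```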
| Apache-2.0 | t81_558_class_08_3_keras_hyperparameters.ipynb | rserran/t81_558_deep_learning |
Siamese U-Net Quickstart 1. IntroductionThe Siamese U-Net is an improvement on the original U-Net architecture. It adds a second encoder that encodes an additional frame besides the frame that we are trying to predict. See [this paper](https://pubmed.ncbi.nlm.nih.gov/31927473/). This repository contains an implementation of this network.If you need help using a function, you can always try running `help(whichever_interesting_function)` or just look at the source code. If you need help using a class (one that is directly under the `biu.siam_unet` directory), working through the examples in this notebook will probably be more helpful than looking for that class's documentation.IMPORTANT: Two packages that depend on your hardware need to be installed manually before running biu. To install CUDA 11.1, which is officially supported by PyTorch, navigate to [its installation page](https://developer.nvidia.com/cuda-11.1.1-download-archive) and follow the instructions onscreen. Because PyTorch depends on your CUDA installation version, it will need to be installed manually as well, through [the official PyTorch website](https://pytorch.org/get-started/locally/). Select the correct distribution of CUDA on this webpage and run the command in your terminal. biu doesn't depend on a specific version of CUDA and has been tested with PyTorch 1.7.0+.Finally, to import the Siamese U-Net package, write `import biu.siam_unet as unet`. 2. Data preparationBecause Siam UNet requires an additional input for training, we need to utilize an additional frame and use the appropriate dataloader for that. For the purpose of this notebook, I will call the frame which we are trying to infer the "current frame", and the frame immediately before it the "previous frame." If your input image is not a movie | from biu.siam_unet.helpers.generate_siam_unet_input_imgs import generate_coupled_image_from_self
from pathlib import Path
import os
# specify where the training data for vanilla u-net is located
training_data_loc = '/home/longyuxi/Documents/mount/deeptissue_training/training_data/amnioserosa/yokogawa/image'
training_data_loc = Path(training_data_loc)
# create a separate folder for storing Siam-UNet input images
siam_training_data_loc = training_data_loc.parent / "siam_image"
siam_training_data_loc.mkdir(exist_ok=True)
### multiprocessing accelerated, equivalent to
## for img in training_data_loc.glob('*.tif'):
## generate_coupled_image_from_self(str(img), str(siam_training_data_loc / img.name))
import multiprocessing
imglist = training_data_loc.glob('*.tif')
def handle_image(img):
generate_coupled_image_from_self(str(img), str(siam_training_data_loc / img.name))
p = multiprocessing.Pool(10)
_ = p.map(handle_image, imglist)
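# quick check on a single training image and its coupled counterpart (written to temp.tif)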
import tifffile
a = tifffile.imread('/home/longyuxi/Documents/mount/deeptissue_training/training_data/leading_edge/eCad/image/00.tif')
print(a.shape)
generate_coupled_image_from_self('/home/longyuxi/Documents/mount/deeptissue_training/training_data/leading_edge/eCad/image/00.tif', 'temp.tif') | _____no_output_____ | MIT | using_siam_unet.ipynb | danihae/bio-image-unet |
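To make the previous-frame/current-frame idea concrete, here is a minimal hand-rolled sketch of a coupled input built from a movie. This is only an illustration: the file name is hypothetical and the side-by-side layout is an assumption, since the exact format written by `generate_coupled_image` may differ.

```python
import numpy as np
import tifffile

movie = tifffile.imread('some_movie.tif')          # hypothetical movie, shape (n_frames, H, W)
frame = 10                                         # the "current" frame we want to segment
prev_frame, curr_frame = movie[frame - 1], movie[frame]

# concatenate the previous and the current frame into a single image (layout is an assumption)
coupled = np.concatenate([prev_frame, curr_frame], axis=1)
tifffile.imwrite('coupled_frame_10.tif', coupled)
```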
If you know which frame you drew the label with The dataloader in `siam_unet_cosh` takes an image that results from concatenating the previous frame with the current frame. If you already know which frame of which movie you want to train on, you can create this concatenated data using `generate_siam_unet_input_imgs.py`. | movie_dir = '/media/longyuxi/H is for HUGE/docmount backup/unet_pytorch/training_data/test_data/new_microscope/21B11-shgGFP-kin-18-bro4.tif' # change this
frame = 10 # change this
out_dir = './training_data/training_data/yokogawa/siam_data/image/' # change this
from biu.siam_unet.helpers.generate_siam_unet_input_imgs import generate_coupled_image
generate_coupled_image(movie_dir, frame, out_dir) | _____no_output_____ | MIT | using_siam_unet.ipynb | danihae/bio-image-unet |
If you don't know which frame you drew the label with If you have frames and labels, but you don't know which frame of which movie each frame comes from, you can use `find_frame_of_image`. This function takes your query and compares it against a list of tif files you specify through the parameter `search_space`. | image_name = f'./training_data/training_data/yokogawa/lateral_epidermis/image/83.tif'
razer_local_search_dir = '/media/longyuxi/H is for HUGE/docmount backup/all_movies'
tifs_names = ['21B11-shgGFP-kin-18-bro4', '21B25_shgGFP_kin_1_Pos0', '21C04_shgGFP_kin_2_Pos4', '21C26_shgGFP_Pos12', '21D16_shgGFPkin_Pos7']
search_space = [razer_local_search_dir + '/' + t + '.tif' for t in tifs_names]
from biu.siam_unet.helpers.find_frame_of_image import find_frame_of_image
find_frame_of_image(image_name, search_space=search_space)
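Conceptually, the search compares the query image against every frame of every candidate movie and keeps the best match. A rough, simplified sketch of that idea (not the actual `biu` implementation):

```python
import numpy as np
import tifffile

query = tifffile.imread(image_name).astype(float)

best = None
for movie_path in search_space:
    movie = tifffile.imread(movie_path).astype(float)
    for i, frame in enumerate(movie):
        if frame.shape != query.shape:
            continue                              # skip movies with a different field of view
        mse = np.mean((frame - query) ** 2)
        if best is None or mse < best[0]:
            best = (mse, movie_path, i)

print('best match (MSE, movie, frame):', best)
```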
| _____no_output_____ | MIT | using_siam_unet.ipynb | danihae/bio-image-unet |
This function not only prints what it finds to stdout, but also creates a machine-readable output (its location is specified by `machine_readable_output_filename`) listing the frames it is highly confident about locating (i.e. an MSE of < 1000 and matching frame numbers). This output can then be used by `generate_siam_unet_input_imgs.py`. | from biu.siam_unet.helpers.generate_siam_unet_input_imgs import utilize_search_result
utilize_search_result(f'./training_data/training_data/yokogawa/amnioserosa/search_result_mr.txt', f'./training_data/test_data/new_microscope', f'./training_data/training_data/yokogawa/amnioserosa/label/', f'./training_data/training_data/yokogawa/siam_amnioserosa_sanitize_test/')
| _____no_output_____ | MIT | using_siam_unet.ipynb | danihae/bio-image-unet |
Finally, organize the labels and images in a way similar to this shown. An example can be found at `training_data/lateral_epidermis/yokogawa_siam-u-net` ```training_data/lateral_epidermis/yokogawa_siam-u-net|├── image│ ├── 105.tif│ ├── 111.tif│ ├── 120.tif│ ├── 121.tif│ ├── 1.tif│ ├── 2.tif│ ├── 3.tif│ ├── 5.tif│ ├── 7.tif│ └── 83.tif└── label ├── 105.tif ├── 111.tif ├── 120.tif ├── 121.tif ├── 1.tif ├── 2.tif ├── 3.tif ├── 5.tif ├── 7.tif └── 83.tif``` 3. Training Training is simple. For example: | from biu.siam_unet import *
dataset = 'amnioserosa/old_scope'
base_dir = '/home/longyuxi/Documents/mount/deeptissue_training/training_data/'
# path to training data (images and labels with identical names in separate folders)
dir_images = f'{base_dir}/{dataset}/siam_image/'
dir_masks = f'{base_dir}/{dataset}/label/'
print('starting to create training dataset')
print(f'dir_images: {dir_images}')
print(f'dir_masks: {dir_masks}')
# create training data set
data = DataProcess([dir_images, dir_masks], data_path='../delete_this_data', dilate_mask=0, aug_factor=10, create=True, invert=False, clip_threshold=(0.2, 99.8), dim_out=(256, 256), shiftscalerotate=(0, 0, 0))
save_dir = f'/home/longyuxi/Documents/mount/trained_networks_new_siam/siam/{dataset}'
# create trainer
training = Trainer(data ,num_epochs=500 ,batch_size=12, load_weights=False, lr=0.0001, n_filter=32, save_iter=True, save_dir=save_dir)
training.start()
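Before training, it can be worth verifying that the folders really follow the layout described above, i.e. that every image has a label with the same file name. This is an optional sanity check, not part of the original notebook:

```python
from pathlib import Path

image_names = {p.name for p in Path(dir_images).glob('*.tif')}
label_names = {p.name for p in Path(dir_masks).glob('*.tif')}

print('images without a matching label:', sorted(image_names - label_names))
print('labels without a matching image:', sorted(label_names - image_names))
```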
| _____no_output_____ | MIT | using_siam_unet.ipynb | danihae/bio-image-unet |
Note here that the `n_filter` parameter is set to `32`. The network won't break with a different value, but you need to use the same value again in the Predict step. 4. Predict Predicting is simple as well. Just swap in your own parameters: | # load package
from biu.siam_unet import *
import os
os.nice(10)
from biu.siam_unet.helpers import tif_to_mp4
base_dir = './'
out_dir = f'{base_dir}/predicted_out'
model = f'{base_dir}/models/siam_bce_amnio/model_epoch_100.pth'
tif_file = f'{base_dir}/training_data/test_data/new_microscope/21C04_shgGFP_kin_2_Pos4.tif'
result_file = f'{out_dir}/siam_bce_amnio_100_epochs_21C04_shgGFP_kin_2_Pos4.tif'
out_mp4_file = result_file[:-4] + '.mp4'
print('starting to predict file')
# predict file
predict = Predict(tif_file, result_file, model, invert=False, resize_dim=(512, 512), n_filter=32)
# convert to mp4
tif_to_mp4.convert_to_mp4(result_file, output_file=out_mp4_file, normalize_to_0_255=True) | _____no_output_____ | MIT | using_siam_unet.ipynb | danihae/bio-image-unet |
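Because `n_filter` (and the input size) must agree between training and prediction, one simple convention, suggested here rather than enforced by `biu`, is to keep those values in a single place and reuse them in both calls:

```python
# hypothetical shared settings reused by both Trainer and Predict
MODEL_SETTINGS = {'n_filter': 32, 'resize_dim': (512, 512)}

# training = Trainer(data, num_epochs=500, batch_size=12,
#                    n_filter=MODEL_SETTINGS['n_filter'], save_dir=save_dir)
# predict = Predict(tif_file, result_file, model,
#                   resize_dim=MODEL_SETTINGS['resize_dim'],
#                   n_filter=MODEL_SETTINGS['n_filter'])
```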
Additionally, to evaluate the model's performance under different losses, one can also run prediction with models trained using each of the different loss functions: | """
For each image in the training dataset, run siam unet to predict.
"""
from pathlib import *
from biu.siam_unet import *
import glob
import logging
def predict_all_training_data(image_folder_prefix, model_folder_prefix, model_loss_functions, datasets, output_directory):
image_folder_prefix = Path(image_folder_prefix)
model_folder_prefix = Path(model_folder_prefix)
datasets = [Path(d) for d in datasets]
output_directory = Path(output_directory)
for dataset in datasets:
for model_loss_function in model_loss_functions:
try:
current_model = Path(model_folder_prefix / model_loss_function / dataset / 'model.pth')
for image in glob.glob((str) (image_folder_prefix / dataset) + "/image/*.tif"):
image_name = image.split('/')[-1]
result_name = Path(output_directory / dataset / Path(image_name[:-4] + '_' + model_loss_function + '.tif'))
_ = Predict(image, result_name, current_model, invert=False, n_filter=32)
# _ = Predict(image, result_name, current_model, invert=False, resize_dim=None, n_filter=32)
except:
logging.error('{} in {} failed to execute'.format(model_loss_function, dataset))
if __name__ == '__main__':
# BEGIN Full dataset
folders = ["amnioserosa/yokogawa", "lateral_epidermis/40x", "lateral_epidermis/60x", "lateral_epidermis/yokogawa", "leading_edge/eCad", "leading_edge/myosin", "leading_edge/yokogawa_eCad", "nodes/old_scope", "nodes/yokogawa"]
model_loss_functions = ['siam_bce_dice','siam_logcoshtversky', 'siam_tversky', 'siam_logcoshtversky_08_02', 'siam_logcoshtversky_15_06', 'siam_logcoshtversky_02_08', "siam_logcoshtversky_06_15", 'siam_tversky_08_02', 'siam_tversky_15_06']
# END Full dataset
# BEGIN Toy dataset
# folders = ["lateral_epidermis/40x"]
# model_loss_functions = ['siam_bce_dice','siam_logcoshtversky', 'siam_tversky', 'siam_logcoshtversky_08_02', 'siam_logcoshtversky_15_06']
# END Toy dataset
predict_all_training_data(image_folder_prefix='/home/longyuxi/Documents/mount/deeptissue_training/training_data', model_loss_functions=model_loss_functions, model_folder_prefix='/home/longyuxi/Documents/mount/trained_networks', datasets=folders, output_directory='/home/longyuxi/Documents/mount/deeptissue_test/output_new_shape') | _____no_output_____ | MIT | using_siam_unet.ipynb | danihae/bio-image-unet |
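Once the predictions for each loss function have been written out, they can be compared against the ground-truth labels. One possible metric, added here as a sketch rather than taken from the original notebook, is a per-image Dice score:

```python
import numpy as np
import tifffile

def dice_score(pred_path, label_path, threshold=127):
    """Dice overlap between a predicted mask and a ground-truth label."""
    pred = tifffile.imread(pred_path) > threshold
    label = tifffile.imread(label_path) > threshold
    intersection = np.logical_and(pred, label).sum()
    return 2.0 * intersection / (pred.sum() + label.sum() + 1e-8)
```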
Evaluation Complete what is missing. | # installation
!pip install pandas
!pip install matplotlib
!pip install pandas-datareader
# 1. import the libraries
import pandas as pd
import pandas_datareader.data as web
import matplotlib.pyplot as plt
# 2. Set a start date "2020-01-01" and an end date "2021-08-31"
start_date =
end_date =
# 3. Use the data reader method to store the
# Facebook ('FB') stock price data in a DataFrame called data.
# https://finance.yahoo.com/quote/FB/history?p=FB
data = web.DataReader(name='FB', data_source='yahoo', start=start_date, end=end_date)
data
# The output looks the same as what we would read from any CSV file.
# 4. Explica el resultado. | _____no_output_____ | BSD-3-Clause | evaluacion_leslytapia.ipynb | LESLYTAPIA/training-python-novice |
* Understand the price movements, whether they go up or down.* Stock prices move constantly throughout the trading day as the supply and demand for shares change (higher or lower price). When the market closes, the final price of the stock is recorded.* The opening price: the price at which a security starts trading in a market session. Normally this price does not differ much from the closing price (unless some important event occurs).* The closing price: the last quote recorded during the day on the stock market for a given financial instrument. It can refer to a company's stock, an index, the local currency, or another similar asset.* The adjusted closing price represents the accurate closing price after corporate actions. For example, if the closing price of company ABC's stock was USD 21.90 but a dividend of 100 cents per share was paid, the closing price is adjusted to USD 20.90.* Volume measures the number of shares bought and sold in a given period for a specific stock, in this case (FB). Volume should be analyzed relative to previous volumes, whether it rises or falls. | # 5. Show a summary of basic information about this DataFrame and its data
# use the dataFrame.info() and dataFrame.describe() functions
# 6. Return the first 5 rows of the DataFrame with dataFrame.head() or dataFrame.iloc[]
# 7. Select only the 'Open', 'Close' and 'Volume' columns of the DataFrame with dataFrame.loc
data.loc[:, ['', '', '']]
# View the range of the data
data.index.min(), data.index.max()
# 8. Now plot the "Close" data using the matplotlib library in Python,
# 9. Add title, marker, linestyle and color to improve the visualization
close = data['']
ax = close.plot(title='Facebook', linestyle='', color='')
ax.set_xlabel('')
ax.set_ylabel('')
ax.grid() # optional
plt.show()
# 10. Explica la grafica sencilla de linea | _____no_output_____ | BSD-3-Clause | evaluacion_leslytapia.ipynb | LESLYTAPIA/training-python-novice |
PyTorch CIFAR-10 local training PrerequisitesThis notebook shows how to use the SageMaker Python SDK to run your code in a local container before deploying to SageMaker's managed training or hosting environments. This can speed up iterative testing and debugging while using the same familiar Python SDK interface. Just change your estimator's `train_instance_type` to `local` (or `local_gpu` if you're using an ml.p2 or ml.p3 notebook instance).In order to use this feature, you'll need to install docker-compose (and nvidia-docker if training with a GPU).**Note: you can only run a single local notebook at one time.** | !/bin/bash ./setup.sh | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/pytorch_cnn_cifar10/pytorch_local_mode_cifar10.ipynb | nigenda-amazon/amazon-sagemaker-examples |
OverviewThe **SageMaker Python SDK** helps you deploy your models for training and hosting in optimized, productions ready containers in SageMaker. The SageMaker Python SDK is easy to use, modular, extensible and compatible with TensorFlow, MXNet, PyTorch. This tutorial focuses on how to create a convolutional neural network model to train the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) using **PyTorch in local mode**. Set up the environmentThis notebook was created and tested on a single ml.p2.xlarge notebook instance.Let's start by specifying:- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the sagemaker.get_execution_role() with appropriate full IAM role arn string(s). | import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/DEMO-pytorch-cnn-cifar10'
role = sagemaker.get_execution_role()
import os
import subprocess
instance_type = "local"
try:
if subprocess.call("nvidia-smi") == 0:
## Set type to GPU if one is present
instance_type = "local_gpu"
except:
pass
print("Instance type = " + instance_type) | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/pytorch_cnn_cifar10/pytorch_local_mode_cifar10.ipynb | nigenda-amazon/amazon-sagemaker-examples |
Download the CIFAR-10 dataset | from utils_cifar import get_train_data_loader, get_test_data_loader, imshow, classes
trainloader = get_train_data_loader()
testloader = get_test_data_loader() | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/pytorch_cnn_cifar10/pytorch_local_mode_cifar10.ipynb | nigenda-amazon/amazon-sagemaker-examples |
Data Preview | import numpy as np
import torchvision, torch
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%9s' % classes[labels[j]] for j in range(4))) | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/pytorch_cnn_cifar10/pytorch_local_mode_cifar10.ipynb | nigenda-amazon/amazon-sagemaker-examples |
Upload the dataWe use the ```sagemaker.Session.upload_data``` function to upload our datasets to an S3 location. The return value inputs identifies the location -- we will use this later when we start the training job. | inputs = sagemaker_session.upload_data(path='data', bucket=bucket, key_prefix='data/cifar10') | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/pytorch_cnn_cifar10/pytorch_local_mode_cifar10.ipynb | nigenda-amazon/amazon-sagemaker-examples |
Construct a script for training Here is the full code for the network model: | !pygmentize source/cifar10.py | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/pytorch_cnn_cifar10/pytorch_local_mode_cifar10.ipynb | nigenda-amazon/amazon-sagemaker-examples |
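The next step is to point a PyTorch estimator at this script and train in local mode by passing the `instance_type` detected above. The sketch below uses the SageMaker Python SDK v1 argument names referred to in this notebook; the entry point path and framework version are assumptions:

```python
from sagemaker.pytorch import PyTorch

cifar10_estimator = PyTorch(entry_point='source/cifar10.py',   # assumed entry point
                            role=role,
                            framework_version='1.4.0',          # assumed framework version
                            train_instance_count=1,
                            train_instance_type=instance_type)

cifar10_estimator.fit(inputs)
```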