markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
Finding the most important node, i.e. character, in these networks. Let's use our network analysis knowledge to decrypt these graphs that we have just created. Is it Jon Snow, Tyrion, Daenerys, or someone else? Let's see! Network science offers us many different metrics to measure the importance of a node in a network, as we saw in the first part of the tutorial. Note that there is no "correct" way of calculating the most important node in a network; every metric has a different meaning. First, let's measure the importance of a node in a network by looking at the number of neighbors it has, that is, the number of nodes it is connected to. For example, an influential account on Twitter, where the follower-followee relationship forms the network, is an account which has a high number of followers. This measure of importance is called degree centrality. Using this measure, let's extract the top ten important characters from the first book (`graphs[0]`) and the fifth book (`graphs[4]`). NOTE: We are using zero-indexing, and that's why the graph of the first book is accessed by `graphs[0]`. | # We use the in-built degree_centrality method
deg_cen_book1 = nx.degree_centrality(graphs[0])
deg_cen_book5 = nx.degree_centrality(graphs[4]) | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
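Degree centrality is simply a node's degree normalized by the number of other nodes it could possibly connect to. As a quick, hedged sanity check (assuming `Eddard-Stark` is a node in the first book's graph, as the discussion further below suggests), the built-in result can be reproduced by hand:

```python
# Manual degree centrality for one character: degree / (number of nodes - 1)
n_nodes = len(graphs[0])
manual_value = graphs[0].degree('Eddard-Stark') / (n_nodes - 1)
print(manual_value, deg_cen_book1['Eddard-Stark'])
```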
`degree_centrality` returns a dictionary and to access the results we can directly use the name of the character. | deg_cen_book1['Daenerys-Targaryen'] | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
Top 5 important characters in the first book according to degree centrality. | # The following expression sorts the dictionary by
# degree centrality and returns the top 5 from a graph
sorted(deg_cen_book1.items(),
key=lambda x:x[1],
reverse=True)[0:5] | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
Top 5 important characters in the fifth book according to degree centrality. | sorted(deg_cen_book5.items(),
key=lambda x:x[1],
reverse=True)[0:5] | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
To visualize the distribution of degree centrality, let's plot a histogram of it. | plt.hist(deg_cen_book1.values(), bins=30)
plt.show() | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
The above plot shows something that is expected: a large portion of characters aren't connected to a lot of other characters, while some characters are highly connected all through the network. A close real-world example of this is a social network like Twitter, where a few people have millions of connections (followers) but the majority of users aren't connected to that many other users. This exponential-decay-like property resembles the power law seen in real-life networks. | # A log-log plot to show the "signature" of power law in graphs.
from collections import Counter
hist = Counter(deg_cen_book1.values())
plt.scatter(np.log2(list(hist.keys())),
np.log2(list(hist.values())),
alpha=0.9)
plt.show() | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
Exercise: Create a new centrality measure, weighted_degree(Graph, weight), which takes in a graph and the weight attribute and returns a weighted degree dictionary. Weighted degree is calculated by summing the weights of all the edges of a node. Then find the top five characters according to this measure. | from nams.solutions.got import weighted_degree
plt.hist(list(weighted_degree(graphs[0], 'weight').values()), bins=30)
plt.show()
sorted(weighted_degree(graphs[0], 'weight').items(), key=lambda x:x[1], reverse=True)[0:5] | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
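For reference, a minimal sketch of what such a function might look like (the imported `nams.solutions.got.weighted_degree` is the reference implementation; this is only an illustration):

```python
# Sum the chosen weight attribute over all edges incident to each node.
def weighted_degree_sketch(G, weight):
    result = {}
    for node in G.nodes():
        result[node] = sum(d.get(weight, 0) for _, _, d in G.edges(node, data=True))
    return result
```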
Betweenness centrality. Let's do this for betweenness centrality and check if this makes any difference. As different centrality methods use different measures underneath, they find different nodes that are important in the network. A method like betweenness centrality finds nodes which are structurally important to the network, the ones that bind the network together. | # First check unweighted (just the structure)
sorted(nx.betweenness_centrality(graphs[0]).items(),
key=lambda x:x[1], reverse=True)[0:10]
# Let's care about interactions now
sorted(nx.betweenness_centrality(graphs[0],
weight='weight_inv').items(),
key=lambda x:x[1], reverse=True)[0:10] | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
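The weighted call above relies on a `weight_inv` edge attribute that is created earlier in the full notebook and is not shown in this excerpt. A hedged guess at how it is defined (the reciprocal of the interaction count), for readers following along:

```python
# Assumed definition of weight_inv: the inverse of the interaction weight.
for u, v, d in graphs[0].edges(data=True):
    d['weight_inv'] = 1 / d['weight']
```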
We can see there are some differences between the unweighted and weighted centrality measures. Another thing to note is that we are using the weight_inv attribute instead of weight (the number of interactions between characters). This decision is based on the way we want to assign the notion of "importance" of a character. The basic idea behind betweenness centrality is to find nodes which are essential to the structure of the network. As betweenness centrality computes shortest paths underneath, weighted betweenness centrality on the raw weights would end up penalising characters with a high number of interactions. By using weight_inv we instead prop up the characters with many interactions with other characters. PageRank: the billion-dollar algorithm. PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites. NOTE: We don't need to worry about weight and weight_inv in PageRank, as the algorithm uses weights in the opposite sense (larger weights are better). This may seem confusing, as different centrality measures have different definitions of weights, so it is always better to have a look at the documentation before using weights in a centrality measure. | # by default weight attribute in PageRank is weight
# so we use weight=None to find the unweighted results
sorted(nx.pagerank_numpy(graphs[0],
weight=None).items(),
key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.pagerank_numpy(
graphs[0], weight='weight').items(),
key=lambda x:x[1], reverse=True)[0:10] | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
Exercise: Is there a correlation between these techniques? Find the correlation between these four techniques:
- pagerank (weight = 'weight')
- betweenness_centrality (weight = 'weight_inv')
- weighted_degree
- degree centrality

HINT: Use pandas correlation | from nams.solutions.got import correlation_centrality
correlation_centrality(graphs[0]) | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
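A hedged sketch of what `correlation_centrality` might compute, assuming pandas is imported as `pd` (it is used later in this notebook) and reusing the measures introduced above:

```python
# Stack the four centrality dictionaries into a DataFrame and correlate them.
def correlation_centrality_sketch(G):
    records = [
        nx.pagerank_numpy(G, weight='weight'),
        nx.betweenness_centrality(G, weight='weight_inv'),
        weighted_degree(G, 'weight'),
        nx.degree_centrality(G),
    ]
    return pd.DataFrame.from_records(records).T.corr()
```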
Evolution of importance of characters over the books. According to degree centrality, the most important character in the first book is Eddard Stark, but he is not even in the top 10 of the fifth book. The importance changes over the course of the five books, because you know stuff happens ;) Let's look at the evolution of degree centrality of a few characters like Eddard Stark, Jon Snow, and Tyrion, which showed up in the top 10 of degree centrality in the first book. We create a dataframe with character columns and books as the index, where every entry is the degree centrality of the character in that particular book, and plot the evolution of the degree centrality of Eddard Stark, Jon Snow and Tyrion. We can see that the importance of Eddard Stark in the network dies off, and with Jon Snow there is a drop in the fourth book but a sudden rise in the fifth book. | evol = [nx.degree_centrality(graph)
for graph in graphs]
evol_df = pd.DataFrame.from_records(evol).fillna(0)
evol_df[['Eddard-Stark',
'Tyrion-Lannister',
'Jon-Snow']].plot()
plt.show()
set_of_char = set()
for i in range(5):
set_of_char |= set(list(
evol_df.T[i].sort_values(
ascending=False)[0:5].index))
set_of_char | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
Exercise: Plot the evolution of betweenness centrality of the above-mentioned characters over the 5 books. | from nams.solutions.got import evol_betweenness
evol_betweenness(graphs) | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
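A hedged sketch of what `evol_betweenness` might do, mirroring the degree-centrality evolution code above:

```python
# Betweenness centrality per book, collected into a DataFrame and plotted
# for the same three characters.
evol_bet = [nx.betweenness_centrality(graph, weight='weight_inv') for graph in graphs]
evol_bet_df = pd.DataFrame.from_records(evol_bet).fillna(0)
evol_bet_df[['Eddard-Stark', 'Tyrion-Lannister', 'Jon-Snow']].plot()
plt.show()
```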
So what's up with Stannis Baratheon? | sorted(nx.degree_centrality(graphs[4]).items(),
key=lambda x:x[1], reverse=True)[:5]
sorted(nx.betweenness_centrality(graphs[4]).items(),
key=lambda x:x[1], reverse=True)[:5]
nx.draw(nx.barbell_graph(5, 1), with_labels=True) | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
As we know, a higher betweenness centrality means that the node is crucial to the structure of the network. In the case of Stannis Baratheon in the fifth book, he seems to have characteristics similar to those of node 5 in the barbell graph example above, as he appears to be holding the network together. As is evident from the betweenness centrality scores of the barbell graph example, node 5 is the most important node in that network. | nx.betweenness_centrality(nx.barbell_graph(5, 1)) | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
Community detection in networks. A network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally. There are multiple algorithms and definitions to calculate these communities in a network. We will use the Louvain community detection algorithm to find the modules in our graph. | import nxviz as nv
from nxviz import annotate
import community  # python-louvain package (assumed installed), provides best_partition
plt.figure(figsize=(8, 8))
partition = community.best_partition(graphs[0], randomize=False)
# Annotate nodes' partitions
for n in graphs[0].nodes():
graphs[0].nodes[n]["partition"] = partition[n]
graphs[0].nodes[n]["degree"] = graphs[0].degree(n)
nv.matrix(graphs[0], group_by="partition", sort_by="degree", node_color_by="partition")
annotate.matrix_block(graphs[0], group_by="partition", color_by="partition")
annotate.matrix_group(graphs[0], group_by="partition", offset=-8) | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
A common defining quality of a community is that the within-community edges are denser than the between-community edges. | # Louvain community detection finds us 8 different sets of communities
partition_dict = {}
for character, par in partition.items():
if par in partition_dict:
partition_dict[par].append(character)
else:
partition_dict[par] = [character]
len(partition_dict)
partition_dict[2] | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
If we plot these communities of the network, we see denser subnetworks compared to the original network, which contains all the characters. | nx.draw(nx.subgraph(graphs[0], partition_dict[3]))
nx.draw(nx.subgraph(graphs[0],partition_dict[1])) | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
We can test this by calculating the density of the whole network and of a community. As in the following example, the network between characters in a community is 5 times denser than the original network. | nx.density(nx.subgraph(
graphs[0], partition_dict[4])
)/nx.density(graphs[0]) | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
Exercise: Find the most important node in each partition, according to the degree centrality of the nodes, using the partition_dict we have already created. | from nams.solutions.got import most_important_node_in_partition
most_important_node_in_partition(graphs[0], partition_dict) | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
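One possible hedged sketch of the idea (the imported solution is authoritative): within each partition, pick the node with the highest degree centrality.

```python
def most_important_node_sketch(G, partition_dict):
    deg = nx.degree_centrality(G)
    return {part: max(nodes, key=lambda n: deg[n]) for part, nodes in partition_dict.items()}

most_important_node_sketch(graphs[0], partition_dict)
```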
Solutions: Here are the solutions to the exercises above. | from nams.solutions import got
import inspect
print(inspect.getsource(got)) | _____no_output_____ | MIT | notebooks/05-casestudies/01-gameofthrones.ipynb | khanin-th/Network-Analysis-Made-Simple |
SICP Exercise 2.8 solution notes: interval subtraction. SICP Exercise 2.8 asks us to implement subtraction for interval arithmetic; interval addition is already given in the book, with the following code: | (define (add-interval x y)
(make-interval (+ (lower-bound x) (lower-bound y))
(+ (upper-bound x) (upper-bound y)))) | _____no_output_____ | MIT | cn/.ipynb_checkpoints/sicp-2-08-checkpoint.ipynb | DamonDeng/sicp_exercise |
The code above is straightforward: for interval addition we add the two lower bounds to get the new lower bound, and add the two upper bounds to get the new upper bound. Subtraction is the inverse of addition, but note that the bounds cross over: the smallest possible difference is the first interval's lower bound minus the second interval's upper bound, and the largest is the first interval's upper bound minus the second interval's lower bound. Modelled on the addition code above, this gives: | (define (sub-interval x y)
  (make-interval (- (lower-bound x) (upper-bound y))
                 (- (upper-bound x) (lower-bound y)))) | _____no_output_____ | MIT | cn/.ipynb_checkpoints/sicp-2-08-checkpoint.ipynb | DamonDeng/sicp_exercise |
Learning Tree Structure from Data using the Chow-Liu Algorithm In this notebook, we show an example for learning the structure of a Bayesian Network using the Chow-Liu algorithm. We will first build a model to generate some data and then attempt to learn the model's graph structure back from the generated data. First, create a tree graph | import networkx as nx
import matplotlib.pyplot as plt
from pgmpy.models import BayesianNetwork
# construct the tree graph structure
model = BayesianNetwork([('A', 'B'), ('A', 'C'), ('B', 'D'), ('B', 'E'), ('C', 'F')])
nx.draw_circular(model, with_labels=True, arrowsize=30, node_size=800, alpha=0.3, font_weight='bold')
plt.show()
| _____no_output_____ | MIT | examples/Structure Learning with Chow-Liu.ipynb | vbob/pgmpy |
Then, add CPDs to our tree to create a Bayesian network | from pgmpy.factors.discrete import TabularCPD
# add CPD to each edge
cpd_a = TabularCPD('A', 2, [[0.4], [0.6]])
cpd_b = TabularCPD('B', 3, [[0.6,0.2],[0.3,0.5],[0.1,0.3]], evidence=['A'], evidence_card=[2])
cpd_c = TabularCPD('C', 2, [[0.3,0.4],[0.7,0.6]], evidence=['A'], evidence_card=[2])
cpd_d = TabularCPD('D', 3, [[0.5,0.3,0.1],[0.4,0.4,0.8],[0.1,0.3,0.1]], evidence=['B'], evidence_card=[3])
cpd_e = TabularCPD('E', 2, [[0.3,0.5,0.2],[0.7,0.5,0.8]], evidence=['B'], evidence_card=[3])
cpd_f = TabularCPD('F', 3, [[0.3,0.6],[0.5,0.2],[0.2,0.2]], evidence=['C'], evidence_card=[2])
model.add_cpds(cpd_a, cpd_b, cpd_c, cpd_d, cpd_e, cpd_f)
| _____no_output_____ | MIT | examples/Structure Learning with Chow-Liu.ipynb | vbob/pgmpy |
Next, generate sample data from our tree Bayesian network | from pgmpy.sampling import BayesianModelSampling
# sample data from BN
inference = BayesianModelSampling(model)
df_data = inference.forward_sample(size=10000)
print(df_data)
| Generating for node: D: 100%|██████████| 6/6 [00:00<00:00, 275.41it/s] | MIT | examples/Structure Learning with Chow-Liu.ipynb | vbob/pgmpy |
Finally, apply the Chow-Liu algorithm to learn the tree graph from sample data | from pgmpy.estimators import TreeSearch
# learn graph structure
est = TreeSearch(df_data, root_node="A")
dag = est.estimate(estimator_type="chow-liu")
nx.draw_circular(dag, with_labels=True, arrowsize=30, node_size=800, alpha=0.3, font_weight='bold')
plt.show()
| Building tree: 100%|██████████| 15/15.0 [00:00<00:00, 4518.10it/s]
| MIT | examples/Structure Learning with Chow-Liu.ipynb | vbob/pgmpy |
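Conceptually, the Chow-Liu algorithm scores every pair of variables by their mutual information and keeps the maximum spanning tree of that score graph. This is not how pgmpy's `TreeSearch` is implemented internally; the following is only a hedged sketch of the idea, assuming scikit-learn is available:

```python
import itertools

import networkx as nx
from sklearn.metrics import mutual_info_score

# Build a complete graph weighted by pairwise mutual information, then keep
# its maximum spanning tree (the undirected skeleton of the Chow-Liu tree).
mi_graph = nx.Graph()
for u, v in itertools.combinations(df_data.columns, 2):
    mi_graph.add_edge(u, v, weight=mutual_info_score(df_data[u], df_data[v]))
skeleton = nx.maximum_spanning_tree(mi_graph)
print(sorted(skeleton.edges()))
```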
To parameterize the learned graph from data, check out the other tutorials for more info | from pgmpy.estimators import BayesianEstimator
# there are many choices of parametrization, here is one example
model = BayesianNetwork(dag.edges())
model.fit(df_data, estimator=BayesianEstimator, prior_type='dirichlet', pseudo_counts=0.1)
model.get_cpds() | _____no_output_____ | MIT | examples/Structure Learning with Chow-Liu.ipynb | vbob/pgmpy |
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Inference PyTorch GPT2 Model with ONNX Runtime on CPUIn this tutorial, you'll be introduced to how to load a GPT2 model from PyTorch, convert it to ONNX, and inference it using ONNX Runtime.**Note: this work is still in progresss. Need install ort_nightly package before onnxruntime 1.3.0 is ready. The performance number of ort_nightly does not reflect the final result for onnxruntime 1.3.0. ** Prerequisites If you have Jupyter Notebook, you may directly run this notebook. We will use pip to install or upgrade [PyTorch](https://pytorch.org/), [OnnxRuntime](https://microsoft.github.io/onnxruntime/) and other required packages.Otherwise, you can setup a new environment. First, we install [AnaConda](https://www.anaconda.com/distribution/). Then open an AnaConda prompt window and run the following commands:```consoleconda create -n cpu_env python=3.6conda activate cpu_envconda install pytorch torchvision cpuonly -c pytorchpip install onnxruntimepip install transformers==2.5.1pip install onnx psutil pytz pandas py-cpuinfo py3nvml netronconda install jupyterjupyter notebook```The last command will launch Jupyter Notebook and we can open this notebook in browser to continue. | # Enable pass state in input.
enable_past_input = False
import os
cache_dir = "./gpt2"
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
output_dir = './gpt2_onnx'
if not os.path.exists(output_dir):
os.makedirs(output_dir) | _____no_output_____ | MIT | onnxruntime/python/tools/bert/notebooks/Inference_GPT2_with_OnnxRuntime_on_CPU.ipynb | lienching/onnxruntime |
Benchmark: You will need to git clone the onnxruntime repository, like so:
```console
git clone https://github.com/microsoft/onnxruntime.git
```
Then update bert_tools_dir according to the path on your machine. | # Assume you have git cloned the onnxruntime repository from GitHub.
bert_tools_dir = r'D:\Git\onnxruntime\onnxruntime\python\tools\bert'
benchmark_script = os.path.join(bert_tools_dir, 'benchmark_gpt2.py')
if enable_past_input:
%run $benchmark_script --model_type gpt2 --cache_dir $cache_dir --output_dir $output_dir --enable_optimization --enable_past_input
else:
%run $benchmark_script --model_type gpt2 --cache_dir $cache_dir --output_dir $output_dir --enable_optimization | _____no_output_____ | MIT | onnxruntime/python/tools/bert/notebooks/Inference_GPT2_with_OnnxRuntime_on_CPU.ipynb | lienching/onnxruntime |
If you only need the benchmark results, you can skip the remaining parts. In the following, we will introduce the benchmark script. Load pretrained model | from transformers import GPT2Model, GPT2Tokenizer
model_class, tokenizer_class, model_name_or_path = (GPT2Model, GPT2Tokenizer, 'gpt2')
tokenizer = tokenizer_class.from_pretrained(model_name_or_path, cache_dir=cache_dir)
model = model_class.from_pretrained(model_name_or_path, cache_dir=cache_dir)
model.eval().cpu()
import numpy
import time
def pytorch_inference(model, input_ids, past=None, total_runs = 100):
latency = []
with torch.no_grad():
for _ in range(total_runs):
start = time.time()
outputs = model(input_ids=input_ids, past=past)
latency.append(time.time() - start)
if total_runs > 1:
print("PyTorch Inference time = {} ms".format(format(sum(latency) * 1000 / len(latency), '.2f')))
return outputs
def onnxruntime_inference(ort_session, input_ids, past=None, total_runs=100):
# Use contiguous array as input might improve performance.
# You can check the results from performance test tool to see whether you need it.
ort_inputs = {
'input_ids': numpy.ascontiguousarray(input_ids.cpu().numpy())
}
if past is not None:
for i, past_i in enumerate(past):
ort_inputs[f'past_{i}'] = numpy.ascontiguousarray(past[i].cpu().numpy())
latency = []
for _ in range(total_runs):
start = time.time()
ort_outputs = ort_session.run(None, ort_inputs)
latency.append(time.time() - start)
if total_runs > 1:
print("OnnxRuntime Inference time = {} ms".format(format(sum(latency) * 1000 / len(latency), '.2f')))
return ort_outputs
def inference(model, ort_session, input_ids, past=None, total_runs=100, verify_outputs=True):
outputs = pytorch_inference(model, input_ids, past, total_runs)
ort_outputs = onnxruntime_inference(ort_session, input_ids, past, total_runs)
if verify_outputs:
print('PyTorch and OnnxRuntime output 0 (last_state) are close:'.format(0), numpy.allclose(ort_outputs[0], outputs[0].cpu(), rtol=1e-05, atol=1e-04))
if enable_past_input:
for layer in range(model.config.n_layer):
print('PyTorch and OnnxRuntime layer {} state (present_{}) are close:'.format(layer, layer), numpy.allclose(ort_outputs[1 + layer], outputs[1][layer].cpu(), rtol=1e-05, atol=1e-04))
import torch
import os
inputs = tokenizer.encode_plus("Here is an example input for GPT2 model", add_special_tokens=True, return_tensors='pt')
input_ids = inputs['input_ids']
# run without past so that we can know the shape of past from output.
outputs = model(input_ids=input_ids, past=None)
num_layer = model.config.n_layer
present_names = [f'present_{i}' for i in range(num_layer)]
output_names = ["last_state"] + present_names
input_names = ['input_ids']
dynamic_axes= {'input_ids': {0: 'batch_size', 1: 'seq_len'},
#'token_type_ids' : {0: 'batch_size', 1: 'seq_len'},
#'attention_mask' : {0: 'batch_size', 1: 'seq_len'},
'last_state' : {0: 'batch_size', 1: 'seq_len'}
}
for name in present_names:
dynamic_axes[name] = {1: 'batch_size', 3: 'seq_len'}
if enable_past_input:
past_names = [f'past_{i}' for i in range(num_layer)]
input_names = ['input_ids'] + past_names #+ ['token_type_ids', 'attention_mask']
dummy_past = [torch.zeros(list(outputs[1][0].shape)) for _ in range(num_layer)]
for name in past_names:
dynamic_axes[name] = {1: 'batch_size', 3: 'seq_len'}
export_inputs = (inputs['input_ids'], tuple(dummy_past)) #, inputs['token_type_ids'], inputs['attention_mask'])
else:
export_inputs = (inputs['input_ids'])
export_model_path = os.path.join(output_dir, 'gpt2_past{}.onnx'.format(int(enable_past_input)))
torch.onnx.export(model,
args=export_inputs,
f=export_model_path,
input_names=input_names,
output_names=output_names,
dynamic_axes=dynamic_axes,
opset_version=11,
do_constant_folding = True,
verbose=False)
def remove_past_outputs(export_model_path, output_model_path):
from onnx import ModelProto
from OnnxModel import OnnxModel
model = ModelProto()
with open(export_model_path, "rb") as f:
model.ParseFromString(f.read())
bert_model = OnnxModel(model)
# remove past state outputs and only keep the first output.
keep_output_names = [bert_model.model.graph.output[0].name]
logger.info(f"Prune graph to keep the first output and drop past state outputs:{keep_output_names}")
bert_model.prune_graph(keep_output_names)
bert_model.save_model_to_file(output_model_path)
if enable_past_input:
onnx_model_path = export_model_path
else:
onnx_model_path = os.path.join(output_dir, 'gpt2_past{}_out1.onnx'.format(int(enable_past_input)))
remove_past_outputs(export_model_path, onnx_model_path) | _____no_output_____ | MIT | onnxruntime/python/tools/bert/notebooks/Inference_GPT2_with_OnnxRuntime_on_CPU.ipynb | lienching/onnxruntime |
Inference with ONNX Runtime. OpenMP Environment Variable: OpenMP environment variables are very important for CPU inference of the GPT2 model; they have a large performance impact, so you might need to set them carefully according to the benchmark script. Setting environment variables must be done before importing onnxruntime; otherwise, they might not take effect. | import psutil
# You may change the settings in this cell according to Performance Test Tool result.
use_openmp = True
# ATTENTION: these environment variables must be set before importing onnxruntime.
if use_openmp:
os.environ["OMP_NUM_THREADS"] = str(psutil.cpu_count(logical=True))
else:
os.environ["OMP_NUM_THREADS"] = '1'
os.environ["OMP_WAIT_POLICY"] = 'ACTIVE'
import onnxruntime
import numpy
# Print warning if user uses onnxruntime-gpu instead of onnxruntime package.
if 'CUDAExecutionProvider' in onnxruntime.get_available_providers():
print("warning: onnxruntime-gpu is not built with OpenMP. You might try onnxruntime package to test CPU inference.")
sess_options = onnxruntime.SessionOptions()
# Optional: store the optimized graph and view it using Netron to verify that model is fully optimized.
# Note that this will increase session creation time, so it is for debugging only.
#sess_options.optimized_model_filepath = os.path.join(output_dir, "optimized_model_cpu.onnx")
if use_openmp:
sess_options.intra_op_num_threads=1
else:
sess_options.intra_op_num_threads=psutil.cpu_count(logical=True)
# Specify providers when you use onnxruntime-gpu for CPU inference.
session = onnxruntime.InferenceSession(onnx_model_path, sess_options, providers=['CPUExecutionProvider'])
# Compare PyTorch and OnnxRuntime inference performance and results
%time inference(model, session, input_ids, past=dummy_past if enable_past_input else None)
import gc
del session
gc.collect()
optimized_model = os.path.join(output_dir, 'gpt2_past{}_optimized.onnx'.format(int(enable_past_input)))
bert_opt_script = os.path.join(bert_tools_dir, 'bert_model_optimization.py')
# Local directory corresponding to https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/bert/
%run $bert_opt_script --model_type gpt2 --input $onnx_model_path --output $optimized_model --opt_level 0
session = onnxruntime.InferenceSession(optimized_model, sess_options, providers=['CPUExecutionProvider'])
%time inference(model, session, input_ids, past=dummy_past if enable_past_input else None, verify_outputs=False) | _____no_output_____ | MIT | onnxruntime/python/tools/bert/notebooks/Inference_GPT2_with_OnnxRuntime_on_CPU.ipynb | lienching/onnxruntime |
Additional Info: Note that running Jupyter Notebook has a slight impact on performance results, since Jupyter Notebook uses system resources like CPU and memory. It is recommended to close Jupyter Notebook and other applications, then run the benchmark script in a console to get more accurate performance numbers. The [OnnxRuntime C API](https://github.com/microsoft/onnxruntime/blob/master/docs/C_API.md) could get slightly better performance than the Python API. If you use the C API for inference, you can use OnnxRuntime_Perf_Test.exe built from source to measure performance instead. Here is the machine configuration that generated the above results. The machine has a GPU, but it is not used in CPU inference. You might get slower or faster results based on your hardware. | machine_info_script = os.path.join(bert_tools_dir, 'MachineInfo.py')
%run $machine_info_script --silent | _____no_output_____ | MIT | onnxruntime/python/tools/bert/notebooks/Inference_GPT2_with_OnnxRuntime_on_CPU.ipynb | lienching/onnxruntime |
Tuberculosis WHO General setup.___ | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = [12, 8] | _____no_output_____ | MIT | Data cleaning/Tuberculosis.ipynb | olgarozhdestvina/Data-Science-and-Machine-Learning |
Load the data set.___ | # Load the data set
tb = pd.read_csv('../Data/tuberculosis.csv')
tb.head()
tb.tail() | _____no_output_____ | MIT | Data cleaning/Tuberculosis.ipynb | olgarozhdestvina/Data-Science-and-Machine-Learning |
There are several issues with the data set:
* Missing values
* Confusing names (for example, m04 means male 0-4 years old) | tb.columns
# Plot the columns from m04 to fu for the last row in the data set.
plt.plot(tb.loc[5768, 'm04':'fu'])
plt.show()
# And now the same as above for all rows
for _, row in tb.iterrows():
plt.plot(row['m04':'fu'], color='C0', alpha=0.1) | _____no_output_____ | MIT | Data cleaning/Tuberculosis.ipynb | olgarozhdestvina/Data-Science-and-Machine-Learning |
Data cleaning.___ | # Melt columns from m04 to fu into sex_age and cases columns
tb_melt = tb.melt(tb.columns[:2], tb.columns[2:], 'sex_age', 'cases')
tb_melt
# Create a new column 'age' from 'sex_age'
tb_melt['age'] = tb_melt.sex_age.apply(lambda x: x[1:])
tb_melt['age']
def age_format(x):
""" Reformatting age column """
if len(x) == 1:
return ''
elif len(x) in [2,3]:
if x == '65':
return '65+'
return f'{x[0]}-{x[1:]}'
return f'{x[:2]}-{x[2:]}'
# Apply the function to the age column
tb_melt['age'] = tb_melt.age.apply(lambda x: age_format(x))
# Remove age from 'sex' column
tb_melt['sex'] = tb_melt.sex_age.apply(lambda x: x[0])
# Drop all empty values
tb_melt.dropna(inplace=True)
tb_melt.head()
# Sort the values, reset the index, and drop the leftover index and sex_age columns
final = tb_melt.sort_values(['country', 'year', 'age', 'sex', 'cases']).reset_index()
final.drop(['index', 'sex_age'], axis=1, inplace=True)
final.head()
# Rearrange the column order
final = final[['country', 'year', 'age', 'sex', 'cases']]
final.head()
# Output to csv file
final.to_csv('data/final_tb.csv', index=False) | _____no_output_____ | MIT | Data cleaning/Tuberculosis.ipynb | olgarozhdestvina/Data-Science-and-Machine-Learning |
Simple Linear Regression. Minimal example. Using the same code as before, please solve the following exercises:
1. Change the number of observations to 100,000 and see what happens.
2. Change the number of observations to 1,000,000 and see what happens.
3. Play around with the learning rate. Values like 0.0001, 0.001, 0.1, 1 are all interesting to observe.
4. Change the loss function. L2-norm loss (without dividing by 2) is a good way to start.
5. Try with the L1-norm loss, given by the sum of the ABSOLUTE value of $y_i - t_i$. The L1-norm loss is given by: $$ \sum_i |y_i-t_i| $$
6. Create a function f(x,z) = 13*xs + 7*zs - 12. Does the algorithm work in the same way?

Useful tip: When you change something, don't forget to RERUN all cells. This can be done easily by clicking: Kernel -> Restart & Run All. If you don't do that, your algorithm will keep the OLD values of all parameters. You can either use this file for all the exercises, or check the solutions of EACH ONE of them in the separate files we have provided. All other files are solutions of each problem. If you feel confident enough, you can simply change values in this file. Please note that it would be nice if you return the file to its starting position after you have solved a problem, so you can use the lecture as a basis for comparison. Import the relevant libraries | # We must always import the relevant libraries for our problem at hand. NumPy is a must for this example.
import numpy as np
# matplotlib and mpl_toolkits are not necessary. We employ them for the sole purpose of visualizing the results.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D | _____no_output_____ | MIT | course_2/course_material/Part_7_Deep_Learning/S43_L300/Minimal_example_All_Exercises.ipynb | Alexander-Meldrum/learning-data-science |
Generate random input data to train on | # First, we should declare a variable containing the size of the training set we want to generate.
observations = 1000
# We will work with two variables as inputs. You can think about them as x1 and x2 in our previous examples.
# We have picked x and z, since it is easier to differentiate them.
# We generate them randomly, drawing from an uniform distribution. There are 3 arguments of this method (low, high, size).
# The size of xs and zs is observations by 1. In this case: 1000 x 1.
xs = np.random.uniform(low=-10, high=10, size=(observations,1))
zs = np.random.uniform(-10, 10, (observations,1))
# Combine the two dimensions of the input into one input matrix.
# This is the X matrix from the linear model y = x*w + b.
# column_stack is a Numpy method, which combines two vectors into a matrix. Alternatives are stack, dstack, hstack, etc.
inputs = np.column_stack((xs,zs))
# Check if the dimensions of the inputs are the same as the ones we defined in the linear model lectures.
# They should be n x k, where n is the number of observations, and k is the number of variables, so 1000 x 2.
print (inputs.shape) | (1000, 2)
| MIT | course_2/course_material/Part_7_Deep_Learning/S43_L300/Minimal_example_All_Exercises.ipynb | Alexander-Meldrum/learning-data-science |
Generate the targets we will aim at | # We want to "make up" a function, use the ML methodology, and see if the algorithm has learned it.
# We add a small random noise to the function i.e. f(x,z) = 2x - 3z + 5 + <small noise>
noise = np.random.uniform(-1, 1, (observations,1))
# Produce the targets according to the f(x,z) = 2x - 3z + 5 + noise definition.
# In this way, we are basically saying: the weights should be 2 and -3, while the bias is 5.
targets = 2*xs - 3*zs + 5 + noise
# Check the shape of the targets just in case. It should be n x m, where m is the number of output variables, so 1000 x 1.
print (targets.shape) | (1000, 1)
| MIT | course_2/course_material/Part_7_Deep_Learning/S43_L300/Minimal_example_All_Exercises.ipynb | Alexander-Meldrum/learning-data-science |
Plot the training data. The point is to see that there is a strong trend that our model should learn to reproduce. Initialize variables | # We will initialize the weights and biases randomly in some small initial range.
# init_range is the variable that will measure that.
# You can play around with the initial range, but we don't really encourage you to do so.
# High initial ranges may prevent the machine learning algorithm from learning.
init_range = 0.1
# Weights are of size k x m, where k is the number of input variables and m is the number of output variables
# In our case, the weights matrix is 2x1 since there are 2 inputs (x and z) and one output (y)
weights = np.random.uniform(low=-init_range, high=init_range, size=(2, 1))
# Biases are of size 1 since there is only 1 output. The bias is a scalar.
biases = np.random.uniform(low=-init_range, high=init_range, size=1)
#Print the weights to get a sense of how they were initialized.
print (weights)
print (biases) | [[0.02158668]
[0.04520037]]
[0.07680059]
| MIT | course_2/course_material/Part_7_Deep_Learning/S43_L300/Minimal_example_All_Exercises.ipynb | Alexander-Meldrum/learning-data-science |
Set a learning rate | # Set some small learning rate (denoted eta in the lecture).
# 0.02 is going to work quite well for our example. Once again, you can play around with it.
# It is HIGHLY recommended that you play around with it.
learning_rate = 0.02 | _____no_output_____ | MIT | course_2/course_material/Part_7_Deep_Learning/S43_L300/Minimal_example_All_Exercises.ipynb | Alexander-Meldrum/learning-data-science |
Train the model | # We iterate over our training dataset 100 times. That works well with a learning rate of 0.02.
# The proper number of iterations is something we will talk about later on, but generally
# a lower learning rate would need more iterations, while a higher learning rate would need less iterations
# keep in mind that a high learning rate may cause the loss to diverge to infinity, instead of converge to 0.
for i in range (100):
# This is the linear model: y = xw + b equation
outputs = np.dot(inputs,weights) + biases
# The deltas are the differences between the outputs and the targets
# Note that deltas here is a vector 1000 x 1
deltas = outputs - targets
# We are considering the L2-norm loss, but divided by 2, so it is consistent with the lectures.
# Moreover, we further divide it by the number of observations.
# This is simple rescaling by a constant. We explained that this doesn't change the optimization logic,
# as any function holding the basic property of being lower for better results, and higher for worse results
# can be a loss function.
loss = np.sum(deltas ** 2) / 2 / observations
# We print the loss function value at each step so we can observe whether it is decreasing as desired.
print (loss)
# Another small trick is to scale the deltas the same way as the loss function
# In this way our learning rate is independent of the number of samples (observations).
# Again, this doesn't change anything in principle, it simply makes it easier to pick a single learning rate
# that can remain the same if we change the number of training samples (observations).
# You can try solving the problem without rescaling to see how that works for you.
deltas_scaled = deltas / observations
# Finally, we must apply the gradient descent update rules from the relevant lecture.
# The weights are 2x1, learning rate is 1x1 (scalar), inputs are 1000x2, and deltas_scaled are 1000x1
# We must transpose the inputs so that we get an allowed operation.
weights = weights - learning_rate * np.dot(inputs.T,deltas_scaled)
biases = biases - learning_rate * np.sum(deltas_scaled)
# The weights are updated in a linear algebraic way (a matrix minus another matrix)
# The biases, however, are just a single number here, so we must transform the deltas into a scalar.
# The two lines are both consistent with the gradient descent methodology. | 237249.78007243446
5739587.921816474
1992902565.643724
719554963540.3586
259830306790502.12
9.382439477181739e+16
3.387987017559843e+19
1.2233978230391737e+22
4.417674051463696e+24
1.5952165074557885e+27
5.76030661387594e+29
2.0800394260452784e+32
7.510996035316175e+34
2.7122111598526613e+37
9.793760163154793e+39
3.536514396563739e+42
1.2770308715701512e+45
4.611342310744724e+47
1.6651498707090051e+50
6.01283510326613e+52
2.1712271438771583e+55
7.840273730021682e+57
2.8311129185637116e+60
1.0223112908630776e+63
3.691553129418609e+65
1.333015161733698e+68
4.813500873795554e+70
1.7381490719052497e+73
6.276434294656793e+75
2.2664124781865262e+78
8.183986767219839e+80
2.955227261174523e+83
1.067128822858039e+86
3.853388670087579e+88
1.3914537705945411e+91
5.024521950592025e+93
1.8143485155956026e+96
6.551589521180396e+98
2.3657706821531062e+101
8.54276798392397e+103
3.084782704329477e+106
1.1139111293713626e+109
4.022318986669404e+111
1.452454294055924e+114
5.2447940685786336e+116
1.89388849854841e+119
6.83880739269138e+121
2.4694846919539974e+124
8.917277960353747e+126
3.2200177827096285e+129
1.162744344974406e+132
4.198655110010857e+134
1.516129044962873e+137
5.474722788016975e+139
1.9769154680607224e+142
7.138616728525449e+144
2.5777454635818573e+147
9.30820623618128e+149
3.361181488217665e+152
1.2137183803280195e+155
4.382721706370037e+157
1.582595259890179e+160
5.714731448694445e+162
2.063582291593845e+165
7.451569531150132e+167
2.69075232442893e+170
9.716272580096515e+172
3.508533728416333e+175
1.266927087724081e+178
4.574857675184804e+180
1.6519753149958247e+183
5.965261949368312e+185
2.154048538229604e+188
7.778242002499892e+190
2.8087133402842874e+193
1.0142228315029865e+196
3.6623458050646816e+198
1.322468433884333e+201
4.775416773047163e+203
1.724396951337346e+206
6.226775561380224e+208
2.24848077246553e+211
8.119235604866657e+213
2.9318456984201305e+216
1.0586857701471766e+219
3.822900913632967e+221
1.3804446803345712e+224
4.984768265032192e+226
1.7999935100658833e+229
6.499753777938913e+231
2.3470528608897527e+234
8.475178168299134e+236
3.0603761074722623e+239
1.1050979381436075e+242
3.9904946647160495e+244
1.4409625716863748e+247
5.203297604580957e+249
1.878904177931045e+252
6.784699200635173e+254
2.4499462923004373e+257
| MIT | course_2/course_material/Part_7_Deep_Learning/S43_L300/Minimal_example_All_Exercises.ipynb | Alexander-Meldrum/learning-data-science |
Print weights and biases and see if we have worked correctly. | # We print the weights and the biases, so we can see if they have converged to what we wanted.
# When declared the targets, following the f(x,z), we knew the weights should be 2 and -3, while the bias: 5.
print (weights, biases)
# Note that they may be convergING. So more iterations are needed. | [[-1.53536772e+125 -1.53536772e+125 -1.53536772e+125 ... -1.53536772e+125
-1.53536772e+125 -1.53536772e+125]
[-1.52680119e+124 -1.52680119e+124 -1.52680119e+124 ... -1.52680119e+124
-1.52680119e+124 -1.52680119e+124]] [-4.2058021e+128]
| MIT | course_2/course_material/Part_7_Deep_Learning/S43_L300/Minimal_example_All_Exercises.ipynb | Alexander-Meldrum/learning-data-science |
Plot last outputs vs targets. Since they are the last ones at the end of the training, they represent the final model accuracy. The closer this plot is to a 45-degree line, the closer the target and output values are. | # We print the outputs and the targets in order to see if they have a linear relationship.
# Again, that's not needed. Moreover, in later lectures, that would not even be possible.
plt.plot(outputs,targets)
plt.xlabel('outputs')
plt.ylabel('targets')
plt.show() | _____no_output_____ | MIT | course_2/course_material/Part_7_Deep_Learning/S43_L300/Minimal_example_All_Exercises.ipynb | Alexander-Meldrum/learning-data-science |
Let $f(x)=e^x$ | def f(x):
z = np.cos(x) + np.sin(3*x) + np.cos(np.sqrt(x)) + np.cos(18*x)
return z
f = lambda x: np.exp(x) | _____no_output_____ | MIT | codes/clase_14/interp_error_lineal.ipynb | mlares/computacion2020 |
and a regular partition of the interval $[0, 1]$ on which the interpolating polynomial of order $n$, $P_n(x)$, is built. Newton interpolation with N points: | N = 30
xd = np.linspace(2, 10, N)
yd = f(xd)
xi = np.linspace(min(xd), max(xd), 200)
ym = f(xi) | _____no_output_____ | MIT | codes/clase_14/interp_error_lineal.ipynb | mlares/computacion2020 |
_______ | yl = it.interp_newton(xi, xd, yd)
fig = plt.figure(figsize=(12, 6))
ax = fig.add_subplot()
ax.plot(xi, yl, linewidth=1.4, linestyle='-', color='orchid',
label='lagrange')
ax.plot(xd, yd, marker='o', linestyle='None', color='navy', markersize=5)
ax.grid()
ax.legend()
fig = plt.figure(figsize=(12, 6))
ax = fig.add_subplot()
ax.plot(xi, yl-ym, linewidth=3, color='indigo')
ax.plot(xd, [0]*len(xd), marker='o', linestyle='None', color='navy',
markersize=8, mfc='white', mec='indigo', mew=2)
ax.set_ylabel('ERROR')
ax.set_xlabel('x')
ax.grid() | _____no_output_____ | MIT | codes/clase_14/interp_error_lineal.ipynb | mlares/computacion2020 |
Let's look at the errors for different N side by side | fig, axs = plt.subplots(5, 4, figsize=[15, 18])
for N, ax in zip(range(6, 66, 3), axs.flat):
xd = np.linspace(2, 10, N)
yd = f(xd)
xi = np.linspace(min(xd), max(xd), 200)
ym = f(xi)
ylgg = it.interp_lagrange(xi, xd, yd)
mx = max(ylgg-ym)
ylin = np.interp(xi, xd, yd)
spline = interp1d(xd, yd, kind='cubic')
ysp3 = spline(xi)
#ax.plot(xi, ylgg-ym, linewidth=2, color='cornflowerblue', label='lagrange')
ax.plot(xi, ylin-ym, linewidth=2, color='peru', label='lineal')
ax.plot(xi, ysp3-ym, linewidth=2, color='mediumaquamarine', linestyle=':', label='cubic spline')
ax.set_title(f'N={N}; max={mx:5.1e}')
ax.legend()
ax.axhline(0, linestyle='--', color='k') | _____no_output_____ | MIT | codes/clase_14/interp_error_lineal.ipynb | mlares/computacion2020 |
Exercice 1: Write a Python class named square constructed by a length and two methods which will compute the area and the perimeter of the square. | class square():
#define your methods
def __init__(self,longueur):
self.longueur = longueur
def aire_carree(self):
return self.longueur**2
def perimetre(self):
return self.longueur*4
square1 = square(5)
print('Aire est : \n',square1.aire_carree())
print('Perimetre est :\n',square1.perimetre()) | Aire est :
25
Perimetre est :
20
| MIT | exercices/part4.ipynb | AbdelwahabHassan/python-bootcamp |
Exercise 2: Write a Python class rectangle that inherits from the square class. | class rectangle(square):
    def __init__(self,longueur,largeur):
        self.largeur = largeur
        super().__init__(longueur)
    # Override the inherited methods so they account for the width (largeur);
    # otherwise the rectangle would reuse the square formulas.
    def aire_carree(self):
        return self.longueur*self.largeur
    def perimetre(self):
        return (self.longueur+self.largeur)*2
rect = rectangle(5,2)
print('Aire_R est : \n',rect.aire_carree())
print('Perimetre_R est :\n',rect.perimetre()) | Aire_R est :
 10
Perimetre_R est :
 14
 | MIT | exercices/part4.ipynb | AbdelwahabHassan/python-bootcamp |
Exercise 3: | class SampleClass:
def __init__(self, a):
        ## private variable in Python
self.a = a
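# Note: decorating a function with a class replaces that function with an instance
# of the class, so after the decorator below `work` is a SampleClass instance and
# `work.a` initially holds the original function object.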
@SampleClass
def work(x):
x = SampleClass(3)
return x
print('p1 --->',work.a)
work.a = 23
print('p2--->',work.a)
| p1 ---> <function work at 0x7f93280d45e0>
p2---> 23
| MIT | exercices/part4.ipynb | AbdelwahabHassan/python-bootcamp |
FloPy Using FloPy to simplify the use of the MT3DMS ```SSM``` packageA multi-component transport demonstration | import os
import sys
import numpy as np
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('flopy version: {}'.format(flopy.__version__)) | flopy is installed in /Users/jdhughes/Documents/Development/flopy_git/flopy_fork/flopy
3.7.3 | packaged by conda-forge | (default, Jul 1 2019, 14:38:56)
[Clang 4.0.1 (tags/RELEASE_401/final)]
numpy version: 1.17.3
flopy version: 3.3.1
| CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | aleaf/flopy |
First, we will create a simple model structure | nlay, nrow, ncol = 10, 10, 10
perlen = np.zeros((10), dtype=np.float) + 10
nper = len(perlen)
ibound = np.ones((nlay,nrow,ncol), dtype=np.int)
botm = np.arange(-1,-11,-1)
top = 0. | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | aleaf/flopy |
Create the ```MODFLOW``` packages | model_ws = 'data'
modelname = 'ssmex'
mf = flopy.modflow.Modflow(modelname, model_ws=model_ws)
dis = flopy.modflow.ModflowDis(mf, nlay=nlay, nrow=nrow, ncol=ncol,
perlen=perlen, nper=nper, botm=botm, top=top,
steady=False)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=top)
lpf = flopy.modflow.ModflowLpf(mf, hk=100, vka=100, ss=0.00001, sy=0.1)
oc = flopy.modflow.ModflowOc(mf)
pcg = flopy.modflow.ModflowPcg(mf)
rch = flopy.modflow.ModflowRch(mf) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | aleaf/flopy |
We'll track the cell locations for the ```SSM``` data using the ```MODFLOW``` boundary conditions. Get a dictionary (```dict```) that has the ```SSM``` ```itype``` for each of the boundary types. | itype = flopy.mt3d.Mt3dSsm.itype_dict()
print(itype)
print(flopy.mt3d.Mt3dSsm.get_default_dtype())
ssm_data = {} | {'CHD': 1, 'BAS6': 1, 'PBC': 1, 'WEL': 2, 'DRN': 3, 'RIV': 4, 'GHB': 5, 'MAS': 15, 'CC': -1}
[('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('css', '<f4'), ('itype', '<i8')]
| CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | aleaf/flopy |
Add a general head boundary (```ghb```). The general head boundary head (```bhead```) is 0.1 for the first 5 stress periods with a component 1 (comp_1) concentration of 1.0 and a component 2 (comp_2) concentration of 100.0. Then ```bhead``` is increased to 0.25 and comp_1 concentration is reduced to 0.5 and comp_2 concentration is increased to 200.0 | ghb_data = {}
print(flopy.modflow.ModflowGhb.get_default_dtype())
ghb_data[0] = [(4, 4, 4, 0.1, 1.5)]
ssm_data[0] = [(4, 4, 4, 1.0, itype['GHB'], 1.0, 100.0)]
ghb_data[5] = [(4, 4, 4, 0.25, 1.5)]
ssm_data[5] = [(4, 4, 4, 0.5, itype['GHB'], 0.5, 200.0)]
for k in range(nlay):
for i in range(nrow):
ghb_data[0].append((k, i, 0, 0.0, 100.0))
ssm_data[0].append((k, i, 0, 0.0, itype['GHB'], 0.0, 0.0))
ghb_data[5] = [(4, 4, 4, 0.25, 1.5)]
ssm_data[5] = [(4, 4, 4, 0.5, itype['GHB'], 0.5, 200.0)]
for k in range(nlay):
for i in range(nrow):
ghb_data[5].append((k, i, 0, -0.5, 100.0))
ssm_data[5].append((k, i, 0, 0.0, itype['GHB'], 0.0, 0.0)) | [('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('bhead', '<f4'), ('cond', '<f4')]
| CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | aleaf/flopy |
Add an injection ```well```. The injection rate (```flux```) is 10.0 with a comp_1 concentration of 10.0 and a comp_2 concentration of 0.0 for all stress periods. WARNING: since we changed the ```SSM``` data in stress period 6, we need to add the well to the ssm_data for stress period 6. | wel_data = {}
print(flopy.modflow.ModflowWel.get_default_dtype())
wel_data[0] = [(0, 4, 8, 10.0)]
ssm_data[0].append((0, 4, 8, 10.0, itype['WEL'], 10.0, 0.0))
ssm_data[5].append((0, 4, 8, 10.0, itype['WEL'], 10.0, 0.0)) | [('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('flux', '<f4')]
| CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | aleaf/flopy |
Add the ```GHB``` and ```WEL``` packages to the ```mf``` ```MODFLOW``` object instance. | ghb = flopy.modflow.ModflowGhb(mf, stress_period_data=ghb_data)
wel = flopy.modflow.ModflowWel(mf, stress_period_data=wel_data) | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | aleaf/flopy |
Create the ```MT3DMS``` packages | mt = flopy.mt3d.Mt3dms(modflowmodel=mf, modelname=modelname, model_ws=model_ws)
btn = flopy.mt3d.Mt3dBtn(mt, sconc=0, ncomp=2, sconc2=50.0)
adv = flopy.mt3d.Mt3dAdv(mt)
ssm = flopy.mt3d.Mt3dSsm(mt, stress_period_data=ssm_data)
gcg = flopy.mt3d.Mt3dGcg(mt) | found 'rch' in modflow model, resetting crch to 0.0
SSM: setting crch for component 2 to zero. kwarg name crch2
| CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | aleaf/flopy |
Let's verify that ```stress_period_data``` has the right ```dtype``` | print(ssm.stress_period_data.dtype) | [('k', '<i8'), ('i', '<i8'), ('j', '<i8'), ('css', '<f4'), ('itype', '<i8'), ('cssm(01)', '<f4'), ('cssm(02)', '<f4')]
| CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | aleaf/flopy |
Create the ```SEAWAT``` packages | swt = flopy.seawat.Seawat(modflowmodel=mf, mt3dmodel=mt,
modelname=modelname, namefile_ext='nam_swt', model_ws=model_ws)
vdf = flopy.seawat.SeawatVdf(swt, mtdnconc=0, iwtable=0, indense=-1)
mf.write_input()
mt.write_input()
swt.write_input() | _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | aleaf/flopy |
And finally, modify the ```vdf``` package to fix ```indense```. | fname = modelname + '.vdf'
f = open(os.path.join(model_ws, fname),'r')
lines = f.readlines()
f.close()
f = open(os.path.join(model_ws, fname),'w')
for line in lines:
f.write(line)
for kper in range(nper):
f.write("-1\n")
f.close()
| _____no_output_____ | CC0-1.0 | examples/Notebooks/flopy3_multi-component_SSM.ipynb | aleaf/flopy |
The ``Tabulator`` widget allows displaying and editing a pandas DataFrame. The `Tabulator` is a largely backward compatible replacement for the [`DataFrame`](./DataFrame.ipynb) widget and will eventually replace it. It is built on the [Tabulator](http://tabulator.info/) library, which provides for a wide range of features. For more information about listening to widget events and laying out widgets refer to the [widgets user guide](../../user_guide/Widgets.ipynb). Alternatively you can learn how to build GUIs by declaring parameters independently of any specific widgets in the [param user guide](../../user_guide/Param.ipynb). To express interactivity entirely using Javascript without the need for a Python server take a look at the [links user guide](../../user_guide/Param.ipynb).

Parameters: For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).

Core
* **``aggregators``** (``dict``): A dictionary mapping from index name to an aggregator to be used for `hierarchical` multi-indexes (valid aggregators include 'min', 'max', 'mean' and 'sum'). If separate aggregators for different columns are required the dictionary may be nested as `{index_name: {column_name: aggregator}}`
* **``configuration``** (``dict``): A dictionary mapping used to specify tabulator options not explicitly exposed by panel.
* **``editors``** (``dict``): A dictionary mapping from column name to a bokeh `CellEditor` instance or tabulator editor specification.
* **``embed_content``** (``boolean``): Whether to embed the `row_content` or to dynamically fetch it when a row is expanded.
* **``expanded``** (``list``): The currently expanded rows as a list of integer indexes.
* **``filters``** (``list``): A list of client-side filter definitions that are applied to the table.
* **``formatters``** (``dict``): A dictionary mapping from column name to a bokeh `CellFormatter` instance or tabulator formatter specification.
* **``groupby``** (`list`): Groups rows in the table by one or more columns.
* **``header_filters``** (``boolean``/``dict``): A boolean enabling filters in the column headers or a dictionary providing filter definitions for specific columns.
* **``hierarchical``** (boolean, default=False): Whether to render multi-indexes as hierarchical index (note hierarchical must be enabled during instantiation and cannot be modified later)
* **``hidden_columns``** (`list`): List of columns to hide.
* **``layout``** (``str``, `default='fit_data_table'`): Describes the column layout mode with one of the following options `'fit_columns'`, `'fit_data'`, `'fit_data_stretch'`, `'fit_data_fill'`, `'fit_data_table'`.
* **``frozen_columns``** (`list`): List of columns to freeze, preventing them from scrolling out of frame. Columns can be specified by name or index.
* **``frozen_rows``** (`list`): List of rows to freeze, preventing them from scrolling out of frame. Rows can be specified by positive or negative index.
* **``page``** (``int``, `default=1`): Current page, if pagination is enabled.
* **``page_size``** (``int``, `default=20`): Number of rows on each page, if pagination is enabled.
* **``pagination``** (`str`, `default=None`): Set to `'local'` or `'remote'` to enable pagination; by default pagination is disabled with the value set to `None`.
* **``row_content``** (``callable``): A function that receives the expanded row as input and should return a Panel object to render into the expanded region below the row.
* **``row_height``** (``int``, `default=30`): The height of each table row.
* **``selection``** (``list``): The currently selected rows as a list of integer indexes.
* **``selectable``** (`boolean` or `str` or `int`, `default=True`): Defines the selection mode:
  * `True` Selects rows on click. To select multiple use Ctrl-select, to select a range use Shift-select
  * `False` Disables selection
  * `'checkbox'` Adds a column of checkboxes to toggle selections
  * `'checkbox-single'` Same as 'checkbox' but header does not allow select/deselect all
  * `'toggle'` Selection toggles when clicked
  * `int` The maximum number of selectable rows.
* **``selectable_rows``** (`callable`): A function that should return a list of integer indexes given a DataFrame indicating which rows may be selected.
* **``show_index``** (``boolean``, `default=True`): Whether to show the index column.
* **``text_align``** (``dict`` or ``str``): A mapping from column name to alignment or a fixed column alignment, which should be one of `'left'`, `'center'`, `'right'`.
* **`theme`** (``str``, `default='simple'`): The CSS theme to apply (note that changing the theme will restyle all tables on the page), which should be one of `'default'`, `'site'`, `'simple'`, `'midnight'`, `'modern'`, `'bootstrap'`, `'bootstrap4'`, `'materialize'`, `'bulma'`, `'semantic-ui'`, or `'fast'`.
* **``titles``** (``dict``): A mapping from column name to a title to override the name with.
* **``value``** (``pd.DataFrame``): The pandas DataFrame to display and edit
* **``widths``** (``dict``): A dictionary mapping from column name to column width in the rendered table.

Display
* **``disabled``** (``boolean``): Whether the widget is editable
* **``name``** (``str``): The title of the widget

Properties
* **``current_view``** (``DataFrame``): The current view of the table that is displayed, i.e. after sorting and filtering are applied
* **``selected_dataframe``** (``DataFrame``): A DataFrame reflecting the currently selected rows.

___

The ``Tabulator`` widget renders a DataFrame using an interactive grid, which allows directly editing the contents of the dataframe in place, with any changes being synced with Python. The `Tabulator` will usually determine the appropriate formatter automatically based on the type of the data: | df = pd.DataFrame({
'int': [1, 2, 3],
'float': [3.14, 6.28, 9.42],
'str': ['A', 'B', 'C'],
'bool': [True, False, True],
'date': [dt.date(2019, 1, 1), dt.date(2020, 1, 1), dt.date(2020, 1, 10)]
}, index=[1, 2, 3])
df_widget = pn.widgets.Tabulator(df)
df_widget | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Formatters: By default the widget will pick bokeh ``CellFormatter`` and ``CellEditor`` types appropriate to the dtype of the column. These may be overridden by explicit dictionaries mapping from the column name to the editor or formatter instance. For example, below we create a ``SelectEditor`` instance to pick from four options in the ``str`` column and a ``NumberFormatter`` to customize the formatting of the float values: | from bokeh.models.widgets.tables import NumberFormatter, BooleanFormatter
bokeh_formatters = {
'float': NumberFormatter(format='0.00000'),
'bool': BooleanFormatter(),
}
pn.widgets.Tabulator(df, formatters=bokeh_formatters) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
The list of valid Bokeh formatters includes: * [BooleanFormatter](https://docs.bokeh.org/en/latest/docs/reference/models/widgets.tables.html#bokeh.models.widgets.tables.BooleanFormatter)* [DateFormatter](https://docs.bokeh.org/en/latest/docs/reference/models/widgets.tables.html#bokeh.models.widgets.tables.DateFormatter)* [NumberFormatter](https://docs.bokeh.org/en/latest/docs/reference/models/widgets.tables.html#bokeh.models.widgets.tables.NumberFormatter)* [HTMLTemplateFormatter](https://docs.bokeh.org/en/latest/docs/reference/models/widgets.tables.html#bokeh.models.widgets.tables.HTMLTemplateFormatter)* [StringFormatter](https://docs.bokeh.org/en/latest/docs/reference/models/widgets.tables.html#bokeh.models.widgets.tables.StringFormatter)* [ScientificFormatter](https://docs.bokeh.org/en/latest/docs/reference/models/widgets.tables.html#bokeh.models.widgets.tables.ScientificFormatter) However, in addition to the formatters exposed by Bokeh, it is also possible to provide formatters built into the Tabulator library. These may be defined either as a string or as a dictionary declaring the 'type' and other arguments, which are passed to Tabulator as the `formatterParams`: | tabulator_formatters = {
'float': {'type': 'progress', 'max': 10},
'bool': {'type': 'tickCross'}
}
pn.widgets.Tabulator(df, formatters=tabulator_formatters) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
The list of valid Tabulator formatters can be found in the [Tabulator documentation](http://tabulator.info/docs/4.9/format#format-builtin). Editors: Just like the formatters, the `Tabulator` will natively understand the Bokeh `Editor` types. However, in the background it will replace most of them with equivalent editors natively supported by the Tabulator library: | from bokeh.models.widgets.tables import CheckboxEditor, NumberEditor, SelectEditor, DateEditor, TimeEditor
bokeh_editors = {
'float': NumberEditor(),
'bool': CheckboxEditor(),
'str': SelectEditor(options=['A', 'B', 'C', 'D']),
}
pn.widgets.Tabulator(df[['float', 'bool', 'str']], editors=bokeh_editors) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Therefore it is often preferable to use one of the [Tabulator editors](http://tabulator.info/docs/4.9/edit#edit) directly: | tabulator_editors = {
'float': {'type': 'number', 'max': 10, 'step': 0.1},
'bool': {'type': 'tickCross', 'tristate': True, 'indeterminateValue': None},
'str': {'type': 'autocomplete', 'values': True}
}
pn.widgets.Tabulator(df[['float', 'bool', 'str']], editors=tabulator_editors) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel
Column layouts: By default the `Tabulator` widget will adjust the sizes of both the columns and the table based on the contents, reflecting the default value of the parameter: `layout="fit_data_table"`. Alternative modes allow manually specifying the widths of the columns, giving each column an equal width, or adjusting just the size of the columns. Manual column widths: To manually adjust column widths, provide explicit `widths` for each of the columns: | custom_df = pd._testing.makeMixedDataFrame()
pn.widgets.Tabulator(custom_df, widths={'index': 70, 'A': 50, 'B': 50, 'C': 70, 'D': 130}) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
You can also declare a single width for all columns this way: | pn.widgets.Tabulator(custom_df, widths=130) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Autosize columns: To automatically adjust the columns depending on their content, set `layout='fit_data'`: | pn.widgets.Tabulator(custom_df, layout='fit_data', width=400) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel
To ensure that the table fits all the data but also stretches to fill all the available space, set `layout='fit_data_stretch'`: | pn.widgets.Tabulator(custom_df, layout='fit_data_stretch', width=400) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
The `'fit_data_fill'` option, on the other hand, won't stretch the last column but will still fill the remaining space: | pn.widgets.Tabulator(custom_df, layout='fit_data_fill', width=400) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel
Perhaps the most useful of these options is `layout='fit_data_table'` (which is why it is the default), since this will automatically size both the columns and the table: | pn.widgets.Tabulator(custom_df, layout='fit_data_table') | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel
Equal size: The simplest option is to allocate each column an equal share of the available width: | pn.widgets.Tabulator(custom_df, layout='fit_columns', width=650) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel
Styling: The ability to style the contents of a table based on its values is very important. Thankfully pandas provides a powerful [styling API](https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html), which can be used in conjunction with the `Tabulator` widget. Specifically, the `Tabulator` widget exposes a `.style` attribute, just like a `pandas.DataFrame`, which lets the user apply custom styling using methods like `.apply` and `.applymap`. For a detailed guide to styling see the [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html). Here we will demonstrate with a simple example, starting with a basic table: | style_df = pd.DataFrame(np.random.randn(10, 5), columns=list('ABCDE'))
styled = pn.widgets.Tabulator(style_df) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Next we define two functions which apply styling cell-wise (`color_negative_red`) and column-wise (`highlight_max`), which we then apply to the `Tabulator` using the `.style` API and then display the `styled` table: | def color_negative_red(val):
"""
Takes a scalar and returns a string with
the css property `'color: red'` for negative
values, black otherwise.
"""
color = 'red' if val < 0 else 'black'
return 'color: %s' % color
def highlight_max(s):
'''
highlight the maximum in a Series yellow.
'''
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]
styled.style.applymap(color_negative_red).apply(highlight_max)
styled | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Theming: The Tabulator library ships with a number of themes, which are defined as CSS stylesheets. For that reason changing the theme on one table will affect all tables on the page, and it will usually be preferable to set the theme once at the class level like this: `pn.widgets.Tabulator.theme = 'default'`. For a full list of themes see the [Tabulator documentation](http://tabulator.info/docs/4.9/theme); the built-in themes include: - `'simple'` - `'default'` - `'midnight'` - `'site'` - `'modern'` - `'bootstrap'` - `'bootstrap4'` - `'materialize'` - `'semantic-ui'` - `'bulma'` Selection: The `selection` parameter controls which rows in the table are selected and can be set from Python and updated by selecting rows on the frontend: | sel_df = pd.DataFrame(np.random.randn(10, 5), columns=list('ABCDE'))
select_table = pn.widgets.Tabulator(sel_df, selection=[0, 3, 7])
select_table | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Once initialized, the ``selection`` parameter will return the integer indexes of the selected rows, while the ``selected_dataframe`` property will return a new DataFrame containing just the selected rows: | select_table.selection = [1, 4, 9]
select_table.selected_dataframe | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
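Because `selection` is a regular Parameter, selection changes made in the browser can also be observed from Python, for example with `param.watch`. A minimal sketch (the callback below is our own, not part of the original example):
```python
# Print the newly selected row indexes whenever the selection changes.
def announce_selection(event):
    print('Selected rows:', event.new)

watcher = select_table.param.watch(announce_selection, 'selection')
```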
The `selectable` parameter declares how the selections work. - `True`: Selects rows on click. To select multiple use Ctrl-select, to select a range use Shift-select- `False`: Disables selection- `'checkbox'`: Adds a column of checkboxes to toggle selections- `'checkbox-single'`: Same as `'checkbox'` but disables (de)select-all in the header- `'toggle'`: Selection toggles when clicked- Any positive `int`: A number that sets the maximum number of selectable rows | pn.widgets.Tabulator(sel_df, selection=[0, 3, 7], selectable='checkbox') | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
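The integer mode from the list above is not demonstrated here; as a small sketch on the same data, it limits how many rows may be selected at once:
```python
# At most two rows can be selected at any one time.
pn.widgets.Tabulator(sel_df, selectable=2)
```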
We can also disable selection for specific rows by providing a `selectable_rows` function. The function must accept a DataFrame and return a list of integer indexes indicating which rows are selectable, e.g. here we disable selection for every second row: | pn.widgets.Tabulator(sel_df, selectable_rows=lambda df: list(range(0, len(df), 2))) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel
Freezing rows and columns: Sometimes your table will be larger than can be displayed in a single viewport, in which case scroll bars will be enabled. In such cases, you might want to make sure that certain information is always visible. This is where the `frozen_columns` and `frozen_rows` options come in. Frozen columns: When you have a large number of columns and can't fit them all on the screen you might still want to make sure that certain columns do not scroll out of view. The `frozen_columns` option makes this possible by specifying a list of columns that should be frozen, e.g. `frozen_columns=['index']` will freeze the index column: | wide_df = pd._testing.makeCustomDataframe(10, 10, r_idx_names=['index'])
pn.widgets.Tabulator(wide_df, frozen_columns=['index'], width=400) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Frozen rows: Another common scenario is when you have certain rows with special meaning, e.g. aggregates that summarize the information in the rest of the table. In this case you may want to freeze those rows so they do not scroll out of view. You can achieve this by setting a list of `frozen_rows` by integer index (which can be positive or negative, where negative values are relative to the end of the table): | date_df = pd._testing.makeTimeDataFrame().iloc[:10]
agg_df = pd.concat([date_df, date_df.median().to_frame('Median').T, date_df.mean().to_frame('Mean').T])
agg_df.index = agg_df.index.map(str)
pn.widgets.Tabulator(agg_df, frozen_rows=[-2, -1], width=400) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Row contents: A table can only display so much information without becoming difficult to scan. We may want to attach additional information to a table row to provide extra context. To make this possible you can provide a `row_content` function, which is given the table row as an argument and should return a Panel object that will be rendered into an expanding region below the row. By default the contents are fetched dynamically whenever a row is expanded; using the `embed_content` parameter we can instead embed all the content up front. Below we create a periodic table of elements where the Wikipedia page for each element will be rendered into the expanded region: | from bokeh.sampledata.periodic_table import elements
periodic_df = elements[['atomic number', 'name', 'atomic mass', 'metal', 'year discovered']].set_index('atomic number')
content_fn = lambda row: pn.pane.HTML(
f'<iframe src="http://en.wikipedia.org/wiki/{row["name"]}?printable=yes" width="100%" height="300px"></iframe>',
sizing_mode='stretch_width'
)
periodic_table = pn.widgets.Tabulator(
periodic_df, height=500, layout='fit_columns', sizing_mode='stretch_width',
row_content=content_fn, embed_content=True
)
periodic_table | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
The currently expanded rows can be accessed (and set) on the `expanded` parameter: | periodic_table.expanded | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Grouping: Another useful option is the ability to group specific columns together, which can be achieved using the `groups` parameter. The `groups` parameter should be a dictionary mapping from group titles to lists of column names: | pn.widgets.Tabulator(date_df, groups={'Group 1': ['A', 'B'], 'Group 2': ['C', 'D']}) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel
Groupby: In addition to grouping columns, we can also group rows by the values along one or more columns: | from bokeh.sampledata.autompg import autompg
pn.widgets.Tabulator(autompg, groupby=['yr', 'origin'], height=240) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Hierarchical Multi-index: The `Tabulator` widget can also render a hierarchical multi-index and aggregate over specific categories. If a DataFrame with a hierarchical multi-index is supplied and the `hierarchical` option is enabled, the widget will group the data by the categories in the order they are defined in. Additionally, for each group in the multi-index an aggregator may be provided which will aggregate over the values in that category. For example, we may load population data for locations around the world broken down by sex and age group. If we specify aggregators over the 'AgeGrp' and 'Sex' indexes we can see the aggregated values for each of those groups (note that we do not have to specify an aggregator for the outer 'Location' index, since the aggregators are defined over the subgroups, in this case 'AgeGrp' and 'Sex'): | from bokeh.sampledata.population import data as population_data
pop_df = population_data[population_data.Year == 2020].set_index(['Location', 'AgeGrp', 'Sex'])[['Value']]
pn.widgets.Tabulator(value=pop_df, hierarchical=True, aggregators={'Sex': 'sum', 'AgeGrp': 'sum'}, height=400) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Pagination: When working with large tables we sometimes can't send all the data to the browser at once. In these scenarios we can enable pagination, which will fetch only the currently viewed data from the server backend. This may be enabled by setting `pagination='remote'` and the size of each page can be set using the `page_size` option: | large_df = pd._testing.makeCustomDataframe(100000, 5)
%%time
paginated_table = pn.widgets.Tabulator(large_df, pagination='remote', page_size=10)
paginated_table | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
In contrast to the `'remote'` option, `'local'` pagination loads all of the data but still allows displaying it on multiple pages. | %%time
paginated_table = pn.widgets.Tabulator(large_df, pagination='local', page_size=10)
paginated_table | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
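Since `page` and `page_size` are parameters themselves (see the reference list at the top), paging can also be driven from Python; a small hedged sketch:
```python
# Jump to the third page and show 25 rows per page.
paginated_table.page = 3
paginated_table.page_size = 25
```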
Filtering: A very common scenario is that you want to attach a number of filters to a table in order to view just a subset of the data. You can achieve this through callbacks or other reactive approaches, but the `.add_filter` method makes it much easier. Constant and Widget filters: The simplest approach to filtering is to select along a column with a constant or dynamic value. The `.add_filter` method allows passing in constant values, widgets and parameters. If a widget or parameter is provided, the table will watch the object for changes in the value and update the data in response. The filtering will depend on the type of the constant or dynamic value: - scalar: Filters by checking for equality. - `tuple`: A tuple will be interpreted as a range. - `list`/`set`: A list or set will be interpreted as a set of discrete scalars and the filter will check if the values in the column match any of the items in the list. As an example we will create a DataFrame with some data of mixed types: | mixed_df = pd._testing.makeMixedDataFrame()
filter_table = pn.widgets.Tabulator(mixed_df)
filter_table | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Now we will start adding filters one-by-one, e.g. to start with we add a filter for the `'A'` column, selecting a range from 0 to 3: | filter_table.add_filter((0, 3), 'A') | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Next we add a dynamic widget-based filter, a `RangeSlider`, which allows us to further narrow down the data along the `'A'` column: | slider = pn.widgets.RangeSlider(start=0, end=3, name='A Filter')
filter_table.add_filter(slider, 'A') | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Lastly we will add a `MultiSelect` filter along the `'C'` column: | select = pn.widgets.MultiSelect(options=['foo1', 'foo2', 'foo3', 'foo4', 'foo5'], name='C Filter')
filter_table.add_filter(select, 'C') | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Now let's display the table alongside the widget based filters: | pn.Row(
pn.Column(slider, select),
filter_table
) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
After filtering you can inspect the current view with the `current_view` property: | filter_table.current_view | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Function based filtering: For more complex filtering tasks you can supply a function that accepts the DataFrame to be filtered as the first argument and returns a filtered copy of the data. Let's start by loading some data. | import sqlite3
from bokeh.sampledata.movies_data import movie_path
con = sqlite3.Connection(movie_path)
movies_df = pd.read_sql('SELECT Title, Year, Genre, Director, Writer, imdbRating from omdb', con)
movies_df = movies_df[~movies_df.Director.isna()]
movies_table = pn.widgets.Tabulator(movies_df, pagination='remote', layout='fit_columns', width=800) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
By using the `pn.bind` function, which binds widget and parameter values to a function, complex filtering can be achieved. E.g. here we will add a filter function that tests whether a string or regex pattern is contained in the 'Director' column of a listing of thousands of movies: | director_filter = pn.widgets.TextInput(name='Director filter', value='Chaplin')
def contains_filter(df, pattern, column):
if not pattern:
return df
return df[df[column].str.contains(pattern)]
movies_table.add_filter(pn.bind(contains_filter, pattern=director_filter, column='Director'))
pn.Row(director_filter, movies_table) | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
Client-side filtering: In addition to the Python API, the Tabulator widget also offers a client-side filtering API, which can be exposed through `header_filters` or by manually adding filters to the rendered Bokeh model. The API for declaring header filters is almost identical to the API for defining [Editors](#Editors). The `header_filters` can either be enabled by setting the parameter to `True` or by manually supplying filter types for each column. The filter types support all the same options as the editors; in fact, if you do not declare explicit `header_filters` the `Tabulator` will simply use the defined `editors` to determine the correct filter type: | tabulator_editors = {
'float': {'type': 'number', 'max': 10, 'step': 0.1},
'bool': {'type': 'tickCross', 'tristate': True, 'indeterminateValue': None},
'str': {'type': 'autocomplete', 'values': True}
}
header_filter_table = pn.widgets.Tabulator(
df[['float', 'bool', 'str']], height=140, width=400, layout='fit_columns',
editors=tabulator_editors, header_filters=True
)
header_filter_table | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |
When a filter is applied client-side, the `filters` parameter is synced with Python. The definition of `filters` looks something like this: `[{'field': 'Director', 'type': '=', 'value': 'Steven Spielberg'}]`. Try applying a filter and then inspect the `filters` parameter: | header_filter_table.filters
For all supported filtering types see the [Tabulator Filtering documentation](http://tabulator.info/docs/4.9/filter). If we want to change the filter type for the `header_filters` we can do so by supplying a dictionary indexed by the column names, where each value is a dictionary that may define the `'type'`, a comparison `'func'`, a `'placeholder'` and any additional keywords supported by the particular filter type. | movie_filters = {
'Title': {'type': 'input', 'func': 'like', 'placeholder': 'Enter title'},
'Year': {'placeholder': 'Enter year'},
'Genre': {'type': 'input', 'func': 'like', 'placeholder': 'Enter genre'},
'Director': {'type': 'input', 'func': 'like', 'placeholder': 'Enter director'},
'Writer': {'type': 'input', 'func': 'like', 'placeholder': 'Enter writer'},
'imdbRating': {'type': 'number', 'func': '>=', 'placeholder': 'Enter minimum rating'}
}
filter_table = pn.widgets.Tabulator(
movies_df, pagination='remote', layout='fit_columns', page_size=10, sizing_mode='stretch_width',
header_filters=movie_filters
)
filter_table | _____no_output_____ | BSD-3-Clause | examples/reference/widgets/Tabulator.ipynb | datalayer-contrib/holoviz-panel |