13.6. Number Of Stratospheric Heterogeneous Reactions Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
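For illustration, a completed cell for one of these integer properties might look like the sketch below; the value 5 is purely hypothetical and should be replaced with the model's actual count.

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
DOC.set_value(5)  # hypothetical value - consult the model documentation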
13.7. Number Of Advected Species Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
13.8. Number Of Steady State Species Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
13.9. Interactive Dry Deposition Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
13.10. Wet Deposition Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
13.11. Wet Oxidation Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry stratospheric heterogeneous chemistry 14.1. Overview Is Required: TRUE    Type: STRING    Cardinality: 1.1 Overview of stratospheric heterogeneous atmospheric chemistry
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
14.2. Gas Phase Species Is Required: FALSE    Type: ENUM    Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
14.3. Aerosol Species Is Required: FALSE    Type: ENUM    Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
14.4. Number Of Steady State Species Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
14.5. Sedimentation Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
14.6. Coagulation Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE    Type: STRING    Cardinality: 1.1 Overview of tropospheric heterogeneous atmospheric chemistry
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
15.2. Gas Phase Species Is Required: FALSE    Type: STRING    Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
15.3. Aerosol Species Is Required: FALSE    Type: ENUM    Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
15.4. Number Of Steady State Species Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
15.5. Interactive Dry Deposition Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
15.6. Coagulation Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE    Type: STRING    Cardinality: 1.1 Overview of atmospheric photo chemistry
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
16.2. Number Of Reactions Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 The number of reactions in the photo-chemistry scheme.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
17. Photo Chemistry --> Photolysis Photolysis scheme 17.1. Method Is Required: TRUE    Type: ENUM    Cardinality: 1.1 Photolysis scheme
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
17.2. Environmental Conditions Is Required: FALSE    Type: STRING    Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
Load the component using the KFP SDK
import kfp.components as comp dataflow_template_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_template/component.yaml') help(dataflow_template_op)
components/gcp/dataflow/launch_template/sample.ipynb
kubeflow/pipelines
apache-2.0
Sample Note: The following sample code works in an IPython notebook or directly in Python code. In this sample, we run a Google-provided word count template from gs://dataflow-templates/latest/Word_Count. The template takes a text file as input and outputs word counts to a Cloud Storage bucket. Here is the sample input:
!gsutil cat gs://dataflow-samples/shakespeare/kinglear.txt
components/gcp/dataflow/launch_template/sample.ipynb
kubeflow/pipelines
apache-2.0
Set sample parameters
# Required Parameters PROJECT_ID = '<Please put your project ID here>' GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash # Optional Parameters EXPERIMENT_NAME = 'Dataflow - Launch Template' OUTPUT_PATH = '{}/out/wc'.format(GCS_WORKING_DIR)
components/gcp/dataflow/launch_template/sample.ipynb
kubeflow/pipelines
apache-2.0
Example pipeline that uses the component
import kfp.dsl as dsl import json @dsl.pipeline( name='Dataflow launch template pipeline', description='Dataflow launch template pipeline' ) def pipeline( project_id = PROJECT_ID, gcs_path = 'gs://dataflow-templates/latest/Word_Count', launch_parameters = json.dumps({ 'parameters': { 'inputFile': 'gs://dataflow-samples/shakespeare/kinglear.txt', 'output': OUTPUT_PATH } }), location = '', validate_only = 'False', staging_dir = GCS_WORKING_DIR, wait_interval = 30): dataflow_template_op( project_id = project_id, gcs_path = gcs_path, launch_parameters = launch_parameters, location = location, validate_only = validate_only, staging_dir = staging_dir, wait_interval = wait_interval)
components/gcp/dataflow/launch_template/sample.ipynb
kubeflow/pipelines
apache-2.0
Compile the pipeline
pipeline_func = pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename)
components/gcp/dataflow/launch_template/sample.ipynb
kubeflow/pipelines
apache-2.0
Submit the pipeline for execution
#Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
components/gcp/dataflow/launch_template/sample.ipynb
kubeflow/pipelines
apache-2.0
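Optionally, you can block until the run finishes. A sketch assuming the v1 KFP SDK, whose client exposes wait_for_run_completion(run_id, timeout):

# Wait for the run to finish (timeout in seconds)
run_detail = client.wait_for_run_completion(run_result.id, timeout=600)
print(run_detail.run.status)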
Inspect the output
!gsutil cat $OUTPUT_PATH*
components/gcp/dataflow/launch_template/sample.ipynb
kubeflow/pipelines
apache-2.0
Plot Original Data Set Plot the Sepal width vs. Sepal length on the original data.
# Plot the first two features BEFORE doing the PCA plt.figure(2, figsize=(8, 6)) plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired) plt.xlabel('Sepal length (cm)') plt.ylabel('Sepal width (cm)') plt.show()
app/.ipynb_checkpoints/my_notebook-checkpoint.ipynb
eric-svds/flask-with-docker
gpl-2.0
Plot Data After PCA After performing a PCA, the first two components are plotted. Note that the two components plotted are linear combinations of the original 4 features of the data set.
# Plot the first two principal components AFTER the PCA plt.figure(2, figsize=(8, 6)) plt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=Y, cmap=plt.cm.Paired) plt.xlabel('Component 1') plt.ylabel('Component 2') plt.show()
app/.ipynb_checkpoints/my_notebook-checkpoint.ipynb
eric-svds/flask-with-docker
gpl-2.0
Save Output The Flask application will make use of the following D3 Scatterplot example. Data has to be in a particular format (see link for an example); this cell flips the data sets into that format and pickles the output.
# Pickle pre- and post-PCA data import pickle features = [] for full_label in iris.feature_names: name = full_label[:-5].split() # remove trailing ' (cm)' features.append(name[0]+name[1].capitalize()) features.append("species") # Create full set for Iris data data1 = [] data_PCA = [] for i, vals in enumerate(X): row1 = dict() row_PCA = dict() for k, val in enumerate(np.append(X[i], iris.target_names[Y[i]])): row1[features[k]] = val data1.append(row1) for k, val in enumerate(np.append(X_PCA[i], iris.target_names[Y[i]])): row_PCA[features[k]] = val data_PCA.append(row_PCA) pickle.dump(data1, open("pkl/data1.pkl", "wb")) pickle.dump(data_PCA, open("pkl/data_PCA.pkl", "wb")) ttt = list(data1[0].values())[3] print(ttt) type(ttt)
app/.ipynb_checkpoints/my_notebook-checkpoint.ipynb
eric-svds/flask-with-docker
gpl-2.0
The line below creates a list of three pairs, each pair containing two pandas.Series objects. A Series is like a dictionary, only its items are ordered and its values must share a data type. The ordered keys of a Series form its index. It is easy to compose Series objects into a DataFrame.
series = [ordered_words(archive.data) for archive in archives]
examples/obsolete_notebooks/SummerSchoolCompareWordRankings.ipynb
sbenthall/bigbang
agpl-3.0
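As a minimal sketch of that composition (the words and numbers here are made up), two Series sharing an index line up column-wise in a DataFrame:

import pandas as pd

ranks = pd.Series({'internet': 1, 'policy': 2})      # hypothetical rankings
counts = pd.Series({'internet': 120, 'policy': 87})  # hypothetical counts
df = pd.concat([ranks, counts], axis=1)              # aligns on the shared word index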
This creates a DataFrame from each of the series. The columns alternate between representing word rankings and representing word counts.
rankings = pd.concat([series[0][0], series[0][1], series[1][0], series[1][1], series[2][0], series[2][1]],axis=1) # display the first 5 rows of the DataFrame rankings[:5]
examples/obsolete_notebooks/SummerSchoolCompareWordRankings.ipynb
sbenthall/bigbang
agpl-3.0
We should rename the columns to be more descriptive of the data.
rankings.rename(columns={0: 'ipc-gnso rankings', 1: 'ipc-gnso counts', 2: 'wp4 rankings', 3: 'wp4 counts', 4: 'ncuc-discuss rankings', 5: 'ncuc-discuss counts'},inplace=True) rankings[:5]
examples/obsolete_notebooks/SummerSchoolCompareWordRankings.ipynb
sbenthall/bigbang
agpl-3.0
Use the to_csv() function on the DataFrame object to export the data to CSV format, which you can open easily in Excel.
rankings.to_csv("rankings_all.csv",encoding="utf-8")
examples/obsolete_notebooks/SummerSchoolCompareWordRankings.ipynb
sbenthall/bigbang
agpl-3.0
To filter the data by certain authors before computing the word rankings, provide a list of author names as an argument. Only emails whose From header includes one of the author names will be included in the calculation. Note that for detecting the author name, the program for now uses simple string inclusion. You may need to try multiple variations of the authors' names in order to catch all emails written by persons of interest.
authors = ["Greg Shatan", "Niels ten Oever"] ordered_words(archives[0].data, authors=authors)
examples/obsolete_notebooks/SummerSchoolCompareWordRankings.ipynb
sbenthall/bigbang
agpl-3.0
Find the symmetries of the qubit operator
[symmetries, sq_paulis, cliffords, sq_list] = qubit_op.find_Z2_symmetries() print('Z2 symmetries found:') for symm in symmetries: print(symm.to_label()) print('single qubit operators found:') for sq in sq_paulis: print(sq.to_label()) print('cliffords found:') for clifford in cliffords: print(clifford.print_operators()) print('single-qubit list: {}'.format(sq_list))
community/aqua/chemistry/LiH_with_qubit_tapering_and_uccsd.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Use the found symmetries, single qubit operators, and cliffords to taper qubits from the original qubit operator. For each Z2 symmetry one can taper one qubit. However, different tapered operators can be built, corresponding to different symmetry sectors.
tapered_ops = [] for coeff in itertools.product([1, -1], repeat=len(sq_list)): tapered_op = Operator.qubit_tapering(qubit_op, cliffords, sq_list, list(coeff)) tapered_ops.append((list(coeff), tapered_op)) print("Number of qubits of tapered qubit operator: {}".format(tapered_op.num_qubits))
community/aqua/chemistry/LiH_with_qubit_tapering_and_uccsd.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
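To see why this loop enumerates one tapered operator per symmetry sector: itertools.product([1, -1], repeat=k) yields all 2**k sign assignments, one per sector. For example, with two symmetries:

import itertools

print(list(itertools.product([1, -1], repeat=2)))
# [(1, 1), (1, -1), (-1, 1), (-1, -1)] -> four symmetry sectors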
The user has to specify the symmetry sector of interest. Since we are interested in finding the ground state here, let us get the original ground state energy as a reference.
ee = get_algorithm_instance('ExactEigensolver') ee.init_args(qubit_op, k=1) result = core.process_algorithm_result(ee.run()) for line in result[0]: print(line)
community/aqua/chemistry/LiH_with_qubit_tapering_and_uccsd.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Now, let us iterate through all tapered qubit operators to find out the one whose ground state energy matches the original (un-tapered) one.
smallest_eig_value = 99999999999999 smallest_idx = -1 for idx in range(len(tapered_ops)): ee.init_args(tapered_ops[idx][1], k=1) curr_value = ee.run()['energy'] if curr_value < smallest_eig_value: smallest_eig_value = curr_value smallest_idx = idx print("Lowest eigenvalue of the {}-th tapered operator (computed part) is {:.12f}".format(idx, curr_value)) the_tapered_op = tapered_ops[smallest_idx][1] the_coeff = tapered_ops[smallest_idx][0] print("The {}-th tapered operator matches original ground state energy, with corresponding symmetry sector of {}".format(smallest_idx, the_coeff))
community/aqua/chemistry/LiH_with_qubit_tapering_and_uccsd.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Alternatively, one can run multiple VQE instances to find the lowest eigenvalue sector. Here we just validate that the_tapered_op reaches the smallest eigenvalue in one VQE execution with the UCCSD variational form, modified to take into account the tapered symmetries.
# setup initial state init_state = get_initial_state_instance('HartreeFock') init_state.init_args(num_qubits=the_tapered_op.num_qubits, num_orbitals=core._molecule_info['num_orbitals'], qubit_mapping=core._qubit_mapping, two_qubit_reduction=core._two_qubit_reduction, num_particles=core._molecule_info['num_particles'], sq_list=sq_list) # setup variational form var_form = get_variational_form_instance('UCCSD') var_form.init_args(num_qubits=the_tapered_op.num_qubits, depth=1, num_orbitals=core._molecule_info['num_orbitals'], num_particles=core._molecule_info['num_particles'], active_occupied=None, active_unoccupied=None, initial_state=init_state, qubit_mapping=core._qubit_mapping, two_qubit_reduction=core._two_qubit_reduction, num_time_slices=1, cliffords=cliffords, sq_list=sq_list, tapering_values=the_coeff, symmetries=symmetries) # setup optimizer optimizer = get_optimizer_instance('COBYLA') optimizer.init_args() optimizer.set_options(maxiter=1000) # set vqe algo = get_algorithm_instance('VQE') algo.setup_quantum_backend(backend='statevector_simulator') algo.init_args(the_tapered_op, 'matrix', var_form, optimizer) algo_result = algo.run() result = core.process_algorithm_result(algo_result) for line in result[0]: print(line) print("The parameters for UCCSD are:\n{}".format(algo_result['opt_params']))
community/aqua/chemistry/LiH_with_qubit_tapering_and_uccsd.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Naive concept of simultaneous deformation Here we try to split simple shear and pure shear into several incremental steps and mutually superpose those increments to simulate simultaneous deformation. We will use the following deformation gradients for total simple shear and pure shear:
from numpy import array, array_equal, allclose from numpy.linalg import matrix_power, svd # assumed imports; plotting names (figure, semilogy, ...) come from %pylab inline gamma = 1 Sx = 2 Fs = array([[1, gamma], [0, 1]]) Fp = array([[Sx, 0], [0, 1/Sx]])
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
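Written out, the two deformation gradients used above are:

$$F_s = \begin{bmatrix} 1 & \gamma \\ 0 & 1 \end{bmatrix}, \qquad F_p = \begin{bmatrix} S_x & 0 \\ 0 & 1/S_x \end{bmatrix}$$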
To divide the simple shear deformation with $\gamma$=1 into n incremental steps
n = 10 Fsi = array([[1, gamma/n], [0, 1]]) print('Incremental deformation gradient:') print(Fsi)
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
To check that superposition of those increments gives the total deformation, we can use the allclose numpy function
array_equal(matrix_power(Fsi, n), Fs) Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]]) print('Incremental deformation gradient:') print(Fpi) allclose(matrix_power(Fpi, n), Fp)
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
Knowing that deformation superposition is not commutative, we can check that the axial ratio of finite strain resulting from simple shear superposed on pure shear and vice versa is really different:
u,s,v = svd(Fs @ Fp) print('Axial ratio of finite strain resulting from simple shear superposed on pure shear: {}'.format(s[0]/s[1])) u,s,v = svd(Fp @ Fs) print('Axial ratio of finite strain resulting from pure shear superposed on simple shear: {}'.format(s[0]/s[1]))
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
Let's try to split those deformations into two increments and mutually mix them:
Fsi = array([[1, gamma/2], [0, 1]]) Fpi = array([[Sx**(1/2), 0], [0, Sx**(-1/2)]]) u,s,v = svd(Fsi @ Fpi @ Fsi @ Fpi) print('Axial ratio of finite strain of superposed increments starting with pure shear: {}'.format(s[0]/s[1])) u,s,v = svd(Fpi @ Fsi @ Fpi @ Fsi) print('Axial ratio of finite strain of superposed increments starting with simple shear: {}'.format(s[0]/s[1]))
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
The results are now closer to each other, but still quite different. So let's split the deformation into many more increments...
n = 100 Fsi = array([[1, gamma/n], [0, 1]]) Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]]) u,s,v = svd(matrix_power(Fsi @ Fpi, n)) print('Axial ratio of finite strain of superposed increments starting with pure shear: {}'.format(s[0]/s[1])) u,s,v = svd(matrix_power(Fpi @ Fsi, n)) print('Axial ratio of finite strain of superposed increments starting with simple shear: {}'.format(s[0]/s[1]))
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
Now it is very close. Let's visualize how the finite strain converges with an increasing number of increments:
arp = [] ars = [] ninc = range(1, 201) for n in ninc: Fsi = array([[1, gamma/n], [0, 1]]) Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]]) u,s,v = svd(matrix_power(Fsi @ Fpi, n)) arp.append(s[0]/s[1]) u,s,v = svd(matrix_power(Fpi @ Fsi, n)) ars.append(s[0]/s[1]) figure(figsize=(16, 4)) semilogy(ninc, arp, 'r', label='Pure shear first') semilogy(ninc, ars, 'g', label='Simple shear first') legend() xlim(1, 200) xlabel('Number of increments') ylabel('Finite strain axial ratio');
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
Using spatial velocity gradient We need to import the matrix exponential and matrix logarithm functions from scipy.linalg
from scipy.linalg import expm, logm
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
The spatial velocity gradient can be obtained as the matrix logarithm of the deformation gradient
Lp = logm(Fp) Ls = logm(Fs)
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
The total spatial velocity gradient of simultaneous deformation can be calculated by summing the individual ones
L = Lp + Ls
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
The resulting deformation gradient can be calculated as the matrix exponential of the total spatial velocity gradient
F = expm(L) u,s,v = svd(F) sar = s[0]/s[1] print('Axial ratio of finite strain of simultaneous pure shear and simple shear: {}'.format(sar))
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
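This is why the incremental experiments above only approximate the simultaneous result: $L_p$ and $L_s$ do not commute, so the exponential of their sum differs from either ordered product of exponentials, and the mixed increments converge to it only in the limit (the Lie product formula):

$$\exp(L_p + L_s) \neq \exp(L_p)\exp(L_s), \qquad \lim_{n\to\infty}\left(\exp(L_p/n)\,\exp(L_s/n)\right)^n = \exp(L_p + L_s)$$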
Let's overlay it on the previous diagram
arp = [] ars = [] ninc = range(1, 201) for n in ninc: Fsi = array([[1, gamma/n], [0, 1]]) Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]]) u,s,v = svd(matrix_power(Fsi @ Fpi, n)) arp.append(s[0]/s[1]) u,s,v = svd(matrix_power(Fpi @ Fsi, n)) ars.append(s[0]/s[1]) figure(figsize=(16, 4)) semilogy(ninc, arp, 'r', label='Pure shear first') semilogy(ninc, ars, 'g', label='Simple shear first') legend() xlim(1, 200) axhline(sar) xlabel('Number of increments') ylabel('Finite strain axial ratio');
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
Decomposition of spatial velocity gradient Here we will decompose the spatial velocity gradient of simple shear into the rate of deformation tensor and the spin tensor.
L = logm(Fs) D = (L + L.T)/2 W = (L - L.T)/2
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
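In formulas, this is the standard split of $L$ into its symmetric and antisymmetric parts:

$$D = \tfrac{1}{2}\left(L + L^{T}\right), \qquad W = \tfrac{1}{2}\left(L - L^{T}\right), \qquad L = D + W$$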
Check that the decomposition gives the total spatial velocity gradient
allclose(D + W, L)
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
Visualize the spatial velocity field for the rate of deformation tensor
vel_field(D)
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
Visualize the spatial velocity field for the spin tensor
vel_field(W)
14_Simultaneous_deformation.ipynb
ondrolexa/sg2
mit
A simple plot with the pyplot API
import numpy as np from bqplot import pyplot as plt plt.figure(1) n = 100 plt.plot(np.linspace(0.0, 10.0, n), np.cumsum(np.random.randn(n)), axes_options={'y': {'grid_lines': 'dashed'}}) plt.show()
2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Scatter Plot
plt.figure(title='Scatter Plot with colors') plt.scatter(y_data_2, y_data_3, color=y_data) plt.show()
2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Histogram
plt.figure() plt.hist(y_data, colors=['OrangeRed']) plt.show()
2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Every component of the figure is an independent widget
xs = bq.LinearScale() ys = bq.LinearScale() x = np.arange(100) y = np.cumsum(np.random.randn(2, 100), axis=1) #two random walks line = bq.Lines(x=x, y=y, scales={'x': xs, 'y': ys}, colors=['red', 'green']) xax = bq.Axis(scale=xs, label='x', grid_lines='solid') yax = bq.Axis(scale=ys, orientation='vertical', tick_format='0.2f', label='y', grid_lines='solid') fig = bq.Figure(marks=[line], axes=[xax, yax], animation_duration=1000) display(fig) # update data of the line mark line.y = np.cumsum(np.random.randn(2, 100), axis=1) xs = bq.LinearScale() ys = bq.LinearScale() x, y = np.random.rand(2, 20) scatt = bq.Scatter(x=x, y=y, scales={'x': xs, 'y': ys}, default_colors=['blue']) xax = bq.Axis(scale=xs, label='x', grid_lines='solid') yax = bq.Axis(scale=ys, orientation='vertical', tick_format='0.2f', label='y', grid_lines='solid') fig = bq.Figure(marks=[scatt], axes=[xax, yax], animation_duration=1000) display(fig) #data updates scatt.x = np.random.rand(20) * 10 scatt.y = np.random.rand(20)
2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb
QuantStack/quantstack-talks
bsd-3-clause
The same holds for the attributes of scales and axes
xs.min = 4 xs.min = None xax.label = 'Some label for the x axis'
2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Use bqplot figures as input widgets
xs = bq.LinearScale() ys = bq.LinearScale() x = np.arange(100) y = np.cumsum(np.random.randn(2, 100), axis=1) #two random walks line = bq.Lines(x=x, y=y, scales={'x': xs, 'y': ys}, colors=['red', 'green']) xax = bq.Axis(scale=xs, label='x', grid_lines='solid') yax = bq.Axis(scale=ys, orientation='vertical', tick_format='0.2f', label='y', grid_lines='solid')
2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Selections
def interval_change_callback(change): db.value = str(change['new']) intsel = bq.interacts.FastIntervalSelector(scale=xs, marks=[line]) intsel.observe(interval_change_callback, names=['selected'] ) db = widgets.Label() db.value = str(intsel.selected) display(db) fig = bq.Figure(marks=[line], axes=[xax, yax], animation_duration=1000, interaction=intsel) display(fig) line.selected
2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Handdraw
handdraw = bq.interacts.HandDraw(lines=line) fig.interaction = handdraw line.y[0]
2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Moving points around
from bqplot import * size = 100 np.random.seed(0) x_data = range(size) y_data = np.cumsum(np.random.randn(size) * 100.0) ## Enabling moving of points in scatter. Try to click and drag any of the points in the scatter and ## notice the line representing the mean of the data update sc_x = LinearScale() sc_y = LinearScale() scat = Scatter(x=x_data[:10], y=y_data[:10], scales={'x': sc_x, 'y': sc_y}, default_colors=['blue'], enable_move=True) lin = Lines(scales={'x': sc_x, 'y': sc_y}, stroke_width=4, line_style='dashed', colors=['orange']) m = Label(value='Mean is %s'%np.mean(scat.y)) def update_line(change): with lin.hold_sync(): lin.x = [np.min(scat.x), np.max(scat.x)] lin.y = [np.mean(scat.y), np.mean(scat.y)] m.value='Mean is %s'%np.mean(scat.y) update_line(None) # update line on change of x or y of scatter scat.observe(update_line, names='x') scat.observe(update_line, names='y') ax_x = Axis(scale=sc_x) ax_y = Axis(scale=sc_y, tick_format='0.2f', orientation='vertical') fig = Figure(marks=[scat, lin], axes=[ax_x, ax_y]) ## In this case on drag, the line updates as you move the points. with scat.hold_sync(): scat.enable_move = True scat.update_on_move = True scat.enable_add = False display(m, fig)
2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Load config from default location.
from kubernetes import client, config config.load_kube_config()
examples/notebooks/intro_notebook.ipynb
kubernetes-client/python
apache-2.0
Create API endpoint instance as well as API resource instances (body and specification).
api_instance = client.AppsV1Api() dep = client.V1Deployment() spec = client.V1DeploymentSpec()
examples/notebooks/intro_notebook.ipynb
kubernetes-client/python
apache-2.0
Fill required object fields (apiVersion, kind, metadata and spec).
name = "my-busybox" dep.metadata = client.V1ObjectMeta(name=name) spec.template = client.V1PodTemplateSpec() spec.template.metadata = client.V1ObjectMeta(name="busybox") spec.template.metadata.labels = {"app":"busybox"} spec.template.spec = client.V1PodSpec() dep.spec = spec container = client.V1Container() container.image = "busybox:1.26.1" container.args = ["sleep", "3600"] container.name = name spec.template.spec.containers = [container]
examples/notebooks/intro_notebook.ipynb
kubernetes-client/python
apache-2.0
Create Deployment using create_xxxx command for Deployments.
api_instance.create_namespaced_deployment(namespace="default",body=dep)
examples/notebooks/intro_notebook.ipynb
kubernetes-client/python
apache-2.0
Use list_xxxx command for Deployment, to list Deployments.
deps = api_instance.list_namespaced_deployment(namespace="default") for item in deps.items: print("%s %s" % (item.metadata.namespace, item.metadata.name))
examples/notebooks/intro_notebook.ipynb
kubernetes-client/python
apache-2.0
Use read_xxxx command for Deployment, to display the detailed state of the created Deployment resource.
api_instance.read_namespaced_deployment(namespace="default",name=name)
examples/notebooks/intro_notebook.ipynb
kubernetes-client/python
apache-2.0
Use patch_xxxx command for Deployment, to make specific update to the Deployment.
dep.metadata.labels = {"key": "value"} api_instance.patch_namespaced_deployment(name=name, namespace="default", body=dep)
examples/notebooks/intro_notebook.ipynb
kubernetes-client/python
apache-2.0
Use replace_xxxx command for Deployment, to update Deployment with a completely new version of the object.
dep.spec.template.spec.containers[0].image = "busybox:1.26.2" api_instance.replace_namespaced_deployment(name=name, namespace="default", body=dep)
examples/notebooks/intro_notebook.ipynb
kubernetes-client/python
apache-2.0
Use delete_xxxx command for Deployment, to delete created Deployment.
api_instance.delete_namespaced_deployment(name=name, namespace="default", body=client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=5))
examples/notebooks/intro_notebook.ipynb
kubernetes-client/python
apache-2.0
Create Data
import numpy as np from scipy import stats # Create a list of 20 observations drawn from a random distribution # with mean 1 and a standard deviation of 1.5 x = np.random.normal(1, 1.5, 20) # Create a list of 20 observations drawn from a random distribution # with mean 0 and a standard deviation of 1.5 y = np.random.normal(0, 1.5, 20)
statistics/t-tests.ipynb
tpin3694/tpin3694.github.io
mit
One Sample Two-Sided T-Test Imagine the one sample T-test as drawing a (normally shaped) hill centered at 1 and "spread" out with a standard deviation of 1.5, then placing a flag at 0 and looking at where on the hill the flag is located. Is it near the top? Far away from the hill? If the flag is near the very bottom of the hill or farther, then the t-test p-value will be below 0.05.
# Run a t-test to test if the mean of x is statistically significantly different than 0 pvalue = stats.ttest_1samp(x, 0)[1] # View the p-value pvalue
statistics/t-tests.ipynb
tpin3694/tpin3694.github.io
mit
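To connect the picture with the arithmetic, here is a sketch computing the one-sample t statistic by hand and checking it against scipy (the variables are illustrative):

import numpy as np
from scipy import stats

x = np.random.normal(1, 1.5, 20)
# t = (sample mean - hypothesized mean) / standard error of the mean
t_manual = (x.mean() - 0) / (x.std(ddof=1) / np.sqrt(len(x)))
t_scipy, p = stats.ttest_1samp(x, 0)
print(t_manual, t_scipy)  # the two statistics should agree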
Two Variable Unpaired Two-Sided T-Test With Equal Variances As with the one sample T-test, imagine drawing two (normally shaped) hills centered at their means, with their 'flatness' (individual spread) based on the standard deviation. The T-test looks at how much the two hills overlap. Are they basically on top of each other? Do just the bottoms of the hills barely touch? If the tails of the hills are just barely overlapping or are not overlapping at all, the t-test p-value will be below 0.05.
stats.ttest_ind(x, y)[1]
statistics/t-tests.ipynb
tpin3694/tpin3694.github.io
mit
Two Variable Unpaired Two-Sided T-Test With Unequal Variances
stats.ttest_ind(x, y, equal_var=False)[1]
statistics/t-tests.ipynb
tpin3694/tpin3694.github.io
mit
Two Variable Paired Two-Sided T-Test Paired T-tests are used when we are taking repeated samples and want to take into account the fact that the two distributions we are testing are paired.
stats.ttest_rel(x, y)[1]
statistics/t-tests.ipynb
tpin3694/tpin3694.github.io
mit
Outline

The outline of the idea is:
- Find the red lines that represent the parallel synchronization signal above
- Calculate their size
- "Synchronize" with the rows below (according to the rules of the code)
- ... PROFIT!!!

Things to keep in mind:
- deviations of red
- deviations of black
- noise - it might just break everything!
- beginning and end of image...

A rather simple PNG we'll work with first:
# Let us first define colour red # We'll work with RGB for colours # So for accepted variants we'll make a list of 3-lists. class colourlist(list): """Just lists of 3-lists with some fancy methods to work with RGB colours """ def add_deviations(self, d=8): # Magical numbers are so magical! """Adds deviations for RGB colours to a given list. Warning! Too huge - it takes forever. Input: list of 3-lists Output: None (side-effects - changes the list) """ #l = l[:] Nah, let's make it a method l = self v = len(l) max_deviation = d for i in range(v): # Iterate through the list of colours for j in range(-max_deviation, max_deviation+1): # Actually it is the deviation. #for k in range(3): # RGB! (no "a"s here) newcolour = self[i][:] # Take one of the original colours newcolour[0] = abs(newcolour[0]+j) # Create a deviation l.append(newcolour) # Append new colour to the end of the list. # <- Here it is changed! for j in range(-max_deviation, max_deviation+1): # Work with all the possibilities with this d newcolour1 = newcolour[:] newcolour1[1] = abs(newcolour1[1]+j) l.append(newcolour1) # Append new colour to the end of the list. Yeah! # <- Here it is changed! for j in range(-max_deviation, max_deviation+1): # Work with all the possibilities with this d newcolour2 = newcolour1[:] newcolour2[2] = abs(newcolour2[2]+j) l.append(newcolour2) # Append new colour to the end of the list. Yeah! # <- Here it is changed! return None def withinDeviation(colour, cl, d=20): """This is much more efficient! Input: 3-list (colour), colourlist, int Output: bool """ for el in cl: if (abs(colour[0] - el[0]) <= d and abs(colour[1] - el[1]) <= d and abs(colour[2] - el[2]) <= d): return True return False accepted_colours = colourlist([[118, 58, 57], [97, 71, 36], [132, 56, 46], [132, 46, 47], [141, 51, 53]]) # ... #accepted_colours.add_deviations() # print(accepted_colours) # -check it! - or better don't - it is a biiiig list.... # print(len(accepted_colours)) # That will take a while... Heh.. 
def find_first_pixel_of_colour(pixellist, accepted_deviations): """Returns the row and column of the pixel in a converted to list with RGB colours PNG Input: ..., colourlist Output: 2-tuple of int (or None) """ accepted_deviations = accepted_deviations[:] rows = len(pixellist) cols = len(pixellist[0]) for j in range(rows): for i in range(0, cols, 3): # if [pixellist[j][i], pixellist[j][i+1], pixellist[j][i+2]] in accepted_deviations: if withinDeviation([pixellist[j][i], pixellist[j][i+1], pixellist[j][i+2]], accepted_deviations): return (j, i) return None fr = find_first_pixel_of_colour(img, accepted_colours) if fr is None: print("Warning a corrupt file or a wrong format!!!") print(fr) print(img[fr[0]][fr[1]], img[fr[0]][fr[1]+1], img[fr[0]][fr[1]+2]) print(img[fr[0]]) # [133, 56, 46] in accepted_colours # Let us now find the length of the red lines that represent the sync signal def find_next_pixel_in_row(pixel, row, accepted_deviations): """Returns the column of the next pixel of a given colour (with deviations) in a row from a converted to list with RGB colours PNG Input: 2-tuple of int, list of int with len%3==0, colourlist Output: int (returns -1 specifically if none are found) """ l = len(row) if pixel[1] >= l-1: return -1 for i in range(pixel[1]+3, l, 3): # if [row[i], row[i+1], row[i+2]] in accepted_deviations: if withinDeviation([row[i], row[i+1], row[i+2]], accepted_deviations): return i return -1 def colour_line_length(pixels, start, colour, deviations=20): line_length = 1 pr = start[:] r = (pr[0], find_next_pixel_in_row(pr, pixels[pr[0]], colour[:])) # print(pr, r) if not(r[1] == pr[1]+3): print("Ooops! Something went wrong!") else: line_length += 1 while (r[1] == pr[1]+3): pr = r r = (pr[0], find_next_pixel_in_row(pr, pixels[pr[0]], colour[:])) line_length += 1 return line_length line_length = colour_line_length(img, fr, accepted_colours, deviations=20) print(line_length) # !!!
Decoder.ipynb
fedor1113/LineCodes
mit
We found the sync (clock) line length in our graph!
print("It is", line_length)
Decoder.ipynb
fedor1113/LineCodes
mit
Now the information transfer signal itself is ~"black", so we need to find the black colour range as well!
# Let's do just that black = colourlist([[0, 0, 0], [0, 1, 0], [7, 2, 8]]) # black.add_deviations(60) # experimentally it is somewhere around that # experimentally the max deviation is somewhere around 60 print(black)
Decoder.ipynb
fedor1113/LineCodes
mit
The signal we are currently interested in is Manchester code (as per G.E. Thomas). It is a self-clocking signal, but since we do have a clock with it, we use it. Let us find the height of the Manchester signal in our PNG - just because...
fb = find_first_pixel_of_colour(img, black) def signal_height(pxls, fib): signal_height = 1 # if ([img[fb[0]+1][fb[1]], img[fb[0]+1][fb[1]+1], img[fb[0]+1][fb[1]+2]] in black): if withinDeviation([pxls[fib[0]+1][fib[1]], pxls[fib[0]+1][fib[1]+1] , pxls[fib[0]+1][fib[1]+2]], black, 60): signal_height += 1 i = 2 rows = len(pxls) # while([img[fb[0]+i][fb[1]], img[fb[0]+i][fb[1]+1], img[fb[0]+i][fb[1]+2]] in black): while(withinDeviation([pxls[fib[0]+i][fib[1]] , pxls[fib[0]+i][fib[1]+1] , pxls[fib[0]+i][fib[1]+2]], black, 60)): signal_height += 1 i += 1 if (i >= rows): break else: print("") # TO DO return signal_height sheight = signal_height(img, fb)-1 print(sheight) # Let's quickly find the last red line ... def manchester(pixels, start, clock, line_colour, d=60, inv=False): """Decodes Manchester code (as per G. E. Thomas) (or with inv=True Manchester code (as per IEEE 802.4)). Input: array of int with len%3==0 (- PNG pixels), int, int, colourlist, int, bool (optional) Output: str (of '1' and '0') or None """ res = "" cols = len(pixels[0]) fb = find_first_pixel_of_colour(pixels, line_colour) m = 2*clock*3-2*3 # Here be dragons! # Hack: only check it using the upper line # (or lack thereof) if not(inv): for i in range(start, cols-2*3, m): fromUP = withinDeviation([pixels[fb[0]][i-6], pixels[fb[0]][i-5], pixels[fb[0]][i-4]], line_colour, d) if fromUP: res = res + "1" else: res = res + "0" else: for i in range(start, cols-2*3, m): fromUP = withinDeviation([pixels[fb[0]][i-6], pixels[fb[0]][i-5], pixels[fb[0]][i-4]], line_colour, d) if fromUP: res = res + "0" else: res = res + "1" return res def nrz(pixels, start, clock, line_colour, d=60, inv=False): """Decodes NRZ code (or with inv=True its inverted version). It is assumed that there is indeed a valid NRZ code with a valid message. Input: array of int with len%3==0 (- PNG pixels), int, int, colourlist, int, bool (optional) Output: str (of '1' and '0') or (maybe?) None """ res = "" cols = len(pixels[0]) fb = find_first_pixel_of_colour(pixels, line_colour) m = 2*clock*3-2*3 # Here be dragons! # Hack: only check it using the upper line # (or lack thereof) if not(inv): for i in range(start, cols, m): UP = withinDeviation([pixels[fb[0]][i], pixels[fb[0]][i+1], pixels[fb[0]][i+2]], line_colour, d) if UP: res = res + "1" else: res = res + "0" else: for i in range(start, cols-2*3, m): UP = withinDeviation([pixels[fb[0]][i], pixels[fb[0]][i+1], pixels[fb[0]][i+2]], line_colour, d) if UP: res = res + "0" else: res = res + "1" return res def code2B1Q(pixels, start, clock=None, line_colour=[[0, 0, 0]], d=60, inv=False): """Decodes 2B1Q code. The clock is not used - it is for compatibility only - really, so put anything there. Does _NOT_ always work! WARNING! Right now does not work AT ALL (apart from one specific case) Input: array of int with len%3==0 (- PNG pixels), int, *, colourlist, int Output: str (of '1' and '0') or None """ res = "" cols = len(pixels[0]) fb = find_first_pixel_of_colour(pixels, line_colour) # (11, 33) # will only work if the first or second dibit is 0b11 ll = colour_line_length(pixels, fb, line_colour, deviations=20) # 10 sh = signal_height(pixels, fb) - 1 # 17 -1? m = ll*3-2*3 # will only work if there is a transition # (after the first dibit) # We only need to check if the line is # on the upper, middle upper or middle lower rows...
for i in range(start, cols, m): UP = withinDeviation([pixels[fb[0]][i], pixels[fb[0]][i+1], pixels[fb[0]][i+2]], line_colour, d) DOWN = withinDeviation([pixels[fb[0]+sh][i], pixels[fb[0]+sh][i+1], pixels[fb[0]+sh][i+2]], line_colour, d) almostUP = UP # if UP: # res = res + "10" if DOWN: # elif DOWN: res = res + "00" # print("00") elif almostUP: res = res + "11" # print("11") else: res = res + "01" # print("01") return res # A-a-and... here is magic! res = manchester(img, fr[1]+5*3, line_length, black, d=60, inv=False) ans = [] for i in range(0, len(res), 8): ans.append(int('0b'+res[i:i+8], 2)) # print(ans) for i in range(0, len(ans)): print(ans[i])
Decoder.ipynb
fedor1113/LineCodes
mit
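For generating test input bit-streams for the decoder, a tiny hypothetical encoder of the bit-to-half-bit mapping may help (assuming the G. E. Thomas convention, where a 1 is a high-to-low transition and a 0 is low-to-high):

def manchester_encode(bits):
    # G. E. Thomas convention: '1' -> (high, low), '0' -> (low, high)
    half_bits = {'1': (1, 0), '0': (0, 1)}
    out = []
    for b in bits:
        out.extend(half_bits[b])
    return out

print(manchester_encode('10'))  # [1, 0, 0, 1]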
Huzzah! And that is how we decode it. Let us now look at some specific examples.
# Here is a helper function to automate all that def parse_code(path_to_file, code, inv=False): """Guess what... Parses a line code PNG Input: str, function (~coincides with the name of the code) Output: str (of '1' and '0') or (maybe?) None """ r1 = png.Reader(path_to_file) t1 = r1.asRGB() img1 = list(t1[2]) fr1 = find_first_pixel_of_colour(img1, accepted_colours) line_length1 = colour_line_length(img1, fr1, accepted_colours, deviations=20) res1 = code(img1, fr1[1]+5*3, line_length1, black, d=60, inv=inv) return res1 def print_nums(bitesstr): """I hope you get the gist... Input: str Output: list (side effects - prints...) """ ans1 = [] for i in range(0, len(bitesstr), 8): ans1.append(int('0b'+bitesstr[i:i+8], 2)) for i in range(0, len(ans1)): print(ans1[i]) return ans1
Decoder.ipynb
fedor1113/LineCodes
mit
Manchester Code (a rather tricky example) Here is a tricky example of Manchester code - where we have ASCII '0's and '1's with which a 3-letter "word" is encoded.
ans1 = print_nums(parse_code("Line_Code_PNGs/Manchester.png", manchester)) res2d = "" for i in range(0, len(ans1)): res2d += chr(ans1[i]) ans2d = [] for i in range(0, len(res2d), 8): print(int('0b'+res2d[i:i+8], 2))
Decoder.ipynb
fedor1113/LineCodes
mit
NRZ
ans2 = print_nums(parse_code("Line_Code_PNGs/NRZ.png", nrz))
Decoder.ipynb
fedor1113/LineCodes
mit
2B1Q Warning! 2B1Q is currently almost completely broken. Pull requests with correct solutions are welcome :)
ans3 = print_nums(parse_code("Line_Code_PNGs/2B1Q.png", code2B1Q)) res2d3 = "" for i in range(0, len(ans3)): res2d3 += chr(ans3[i]) ans2d3 = [] for i in range(0, len(res2d3), 8): print(int('0b'+res2d3[i:i+8], 2))
Decoder.ipynb
fedor1113/LineCodes
mit
Processing a single file We will start with processing one of the downloaded files (BETR8010000800100hour.1-1-1990.31-12-2012). Looking at the data, you will see it does not look like a nice csv file:
with open("data/BETR8010000800100hour.1-1-1990.31-12-2012") as f: print(f.readline())
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
So we will need to do some manual processing. Just reading the tab-delimited data:
import pandas as pd data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None) data.head()
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names. <div class="alert alert-success"> <b>EXERCISE 1</b>: <br><br> Clean up this dataframe by using more options of `pd.read_csv` (see its [docstring](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)) <ul> <li>specify the correct delimiter</li> <li>specify that the values of -999 and -9999 should be regarded as NaN</li> <li>specify our own column names (for how the column names are made up, see <a href="http://stackoverflow.com/questions/6356041/python-intertwining-two-lists">http://stackoverflow.com/questions/6356041/python-intertwining-two-lists</a>) </ul> </div>
# Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag' hours = ["{:02d}".format(i) for i in range(24)] column_names = ['date'] + [item for pair in zip(hours, ['flag' + str(i) for i in range(24)]) for item in pair] # %load _solutions/case4_air_quality_processing1.py # %load _solutions/case4_air_quality_processing2.py
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data). <div class="alert alert-success"> **EXERCISE 2**: Drop all 'flag' columns ('flag1', 'flag2', ...) </div>
flag_columns = [col for col in data.columns if 'flag' in col] # we can now use this list to drop these columns # %load _solutions/case4_air_quality_processing3.py data.head()
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Here we have a wide and long dataframe, and want to make this a long, narrow timeseries. <div class="alert alert-info"> <b>REMEMBER</b>: Recap: reshaping your data with [`stack` / `melt` and `unstack` / `pivot`](./pandas_08_reshaping_data.ipynb)</li> <img src="../img/pandas/schema-stack.svg" width=70%> </div> <div class="alert alert-success"> <b>EXERCISE 3</b>: <br><br> Reshape the dataframe to a timeseries. The end result should look like:<br><br> <div class='center'> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>BETR801</th> </tr> </thead> <tbody> <tr> <th>1990-01-02 09:00:00</th> <td>48.0</td> </tr> <tr> <th>1990-01-02 12:00:00</th> <td>48.0</td> </tr> <tr> <th>1990-01-02 13:00:00</th> <td>50.0</td> </tr> <tr> <th>1990-01-02 14:00:00</th> <td>55.0</td> </tr> <tr> <th>...</th> <td>...</td> </tr> <tr> <th>2012-12-31 20:00:00</th> <td>16.5</td> </tr> <tr> <th>2012-12-31 21:00:00</th> <td>14.5</td> </tr> <tr> <th>2012-12-31 22:00:00</th> <td>16.5</td> </tr> <tr> <th>2012-12-31 23:00:00</th> <td>15.0</td> </tr> </tbody> </table> <p style="text-align:center">170794 rows × 1 columns</p> </div> <ul> <li>Reshape the dataframe so that each row consists of one observation for one date + hour combination</li> <li>When you have the date and hour values as two columns, combine these columns into a datetime (tip: string columns can be summed to concatenate the strings) and remove the original columns</li> <li>Set the new datetime values as the index, and remove the original columns with date and hour values</li> </ul> **NOTE**: This is an advanced exercise. Do not spend too much time on it and don't hesitate to look at the solutions. </div> Reshaping using melt:
# %load _solutions/case4_air_quality_processing4.py
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
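One possible melt-based starting point (a sketch only - it assumes the data and hours names defined earlier; the complete solution lives in the %load-ed file):

# One observation per date/hour combination
data_long = data.melt(id_vars=['date'], value_vars=hours, var_name='hour')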
Reshaping using stack:
# %load _solutions/case4_air_quality_processing5.py # %load _solutions/case4_air_quality_processing6.py
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Combine date and hour:
# %load _solutions/case4_air_quality_processing7.py # %load _solutions/case4_air_quality_processing8.py # %load _solutions/case4_air_quality_processing9.py data_stacked.head()
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Our final data is now a time series. In pandas, this means that the index is a DatetimeIndex:
data_stacked.index data_stacked.plot()
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Processing a collection of files We have now seen the code steps to process one of the files. However, we have multiple files for the different stations with the same structure. Therefore, to avoid repeating the actual code, let's make a function from the steps we have seen above. <div class="alert alert-success"> <b>EXERCISE 4</b>: <ul> <li>Write a function <code>read_airbase_file(filename, station)</code>, using the above steps to read in and process the data, and that returns a processed timeseries.</li> </ul> </div>
def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. """ ... return ... # %load _solutions/case4_air_quality_processing10.py
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Test the function on the data file from above:
import os filename = "data/BETR8010000800100hour.1-1-1990.31-12-2012" station = os.path.split(filename)[-1][:7] station test = read_airbase_file(filename, station) test.head()
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe. <div class="alert alert-success"> **EXERCISE 5**: Use the [pathlib module](https://docs.python.org/3/library/pathlib.html) `Path` class in combination with the `glob` method to list all 4 AirBase data files that are included in the 'data' directory, and call the result `data_files`. <details><summary>Hints</summary> - The pathlib module provides a object oriented way to handle file paths. First, create a `Path` object of the data folder, `pathlib.Path("./data")`. Next, apply the `glob` function to extract all the files containing `*0008001*` (use wildcard * to say "any characters"). The output is a Python generator, which you can collect as a `list()`. </details> </div>
from pathlib import Path # %load _solutions/case4_air_quality_processing11.py
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
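A minimal sketch following the hint (the file pattern is taken from the exercise text):

data_files = list(Path("./data").glob("*0008001*"))
data_files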