| markdown (stringlengths 0–1.02M) | code (stringlengths 0–832k) | output (stringlengths 0–1.02M) | license (stringlengths 3–36) | path (stringlengths 6–265) | repo_name (stringlengths 6–127) |
---|---|---|---|---|---|
Predicting Attacks for some sensors in the train data set 2 The loop below will select the best parameters for the model. **It takes a few minutes to run, please be advised.** _The code only selects the best parameters based on train set 2 - nothing else._ | #############train
#L_T7
L_T7=parameterselection(dftrain1["L_T7"],dftrain2_L_T7[" L_T7"],dftrain2_L_T7[' ATT_FLAG'])
#F_PU10
F_PU10=parameterselection(dftrain1["F_PU10"],dftrain2_F_PU10[" F_PU10"],dftrain2_F_PU10[' ATT_FLAG'])
#F_PU11
F_PU11=parameterselection(dftrain1["F_PU11"],dftrain2_F_PU11[" F_PU11"],dftrain2_F_PU11[' ATT_FLAG'])
#L_T1
L_T1=parameterselection(dftrain1["L_T1"],dftrain2_L_T1[" L_T1"],dftrain2_L_T1[' ATT_FLAG'])
#F_PU1
F_PU1=parameterselection(dftrain1["F_PU1"],dftrain2_F_PU1[" F_PU1"],dftrain2_F_PU1[' ATT_FLAG'])
#F_PU2
F_PU2=parameterselection(dftrain1["F_PU2"],dftrain2_F_PU2[" F_PU2"],dftrain2_F_PU2[' ATT_FLAG'])
#L_T4
L_T4=parameterselection(dftrain1["L_T4"],dftrain2_L_T4[" L_T4"],dftrain2_L_T4[' ATT_FLAG'])
print(L_T7[0][1]["F1"])
print(F_PU10[0][1]["F1"])
print(F_PU11[0][1]["F1"])
print(L_T1[0][1]["F1"])
print(F_PU1[0][1]["F1"])
print(F_PU2[0][1]["F1"])
print(L_T4[0][1]["F1"]) | 0.06684491978609626
0.014204545454545454
0.0
0.1286764705882353
0.08227848101265822
0.08849557522123894
0.036290322580645164
| MIT | batadal/Task3_Discrete.ipynb | imchengh/Cyber-Data-Analytics |
Predicting Attacks for some sensors in the test data According to the information provided about the BATADAL dataset, anomalies can be found at 'L_T3', 'F_PU4', 'F_PU5', 'L_T2', 'V2', 'P_J14', 'P_J422', 'L_T7', 'F_PU10', 'F_PU11', 'L_T6'. The parameters used for prediction will be those optimized on train dataset 2. When these are not available, the parameters of a closely related variable are used. | #'L_T3'
trainset=dftrain1['L_T3']
testset=dftest['L_T3']
sax=L_T1[1]["sax"]
paa=L_T1[1]["paa"]
n=L_T1[1]["ngram"]
t=L_T1[1]["threshold"]
results=dftest_L_T3[' ATT_FLAG']
MODEL_LT3=model_to_predict(trainset,testset,results,sax,int(len(dftrain1)/paa),n,t)
#'F_PU4'
trainset=dftrain1['F_PU4']
testset=dftest['F_PU4']
sax=F_PU10[1]["sax"]
paa=F_PU10[1]["paa"]
n=F_PU10[1]["ngram"]
t=F_PU10[1]["threshold"]
results=dftest_P_U4[' ATT_FLAG']
MODEL_F_PU4=model_to_predict(trainset,testset,results,sax,int(len(dftrain1)/paa),n,t)
#'F_PU5'
trainset=dftrain1['F_PU5']
testset=dftest['F_PU5']
sax=F_PU10[1]["sax"]
paa=F_PU10[1]["paa"]
n=F_PU10[1]["ngram"]
t=F_PU10[1]["threshold"]
results=dftest_P_U5[' ATT_FLAG']
MODEL_F_PU5=model_to_predict(trainset,testset,results,sax,int(len(dftrain1)/paa),n,t)
#'L_T2'
trainset=dftrain1['L_T2']
testset=dftest['L_T2']
sax=L_T1[1]["sax"]
paa=L_T1[1]["paa"]
n=L_T1[1]["ngram"]
t=L_T1[1]["threshold"]
results=dftest_L_T2[' ATT_FLAG']
MODEL_L_T2=model_to_predict(trainset,testset,results,sax,int(len(dftrain1)/paa),n,t)
#'P_J14'
trainset=dftrain1['P_J14']
testset=dftest['P_J14']
sax=L_T7[1]["sax"]
paa=L_T7[1]["paa"]
n=L_T7[1]["ngram"]
t=L_T7[1]["threshold"]
results=dftest_P_J14[' ATT_FLAG']
MODEL_P_J14=model_to_predict(trainset,testset,results,sax,int(len(dftrain1)/paa),n,t)
#'P_J422'
trainset=dftrain1['P_J422']
testset=dftest['P_J422']
sax=L_T7[1]["sax"]
paa=L_T7[1]["paa"]
n=L_T7[1]["ngram"]
t=L_T7[1]["threshold"]
results=dftest_P_J422[' ATT_FLAG']
MODEL_P_J422=model_to_predict(trainset,testset,results,sax,int(len(dftrain1)/paa),n,t)
#'L_T7'
trainset=dftrain1['L_T7']
testset=dftest['L_T7']
sax=L_T7[1]["sax"]
paa=L_T7[1]["paa"]
n=L_T7[1]["ngram"]
t=L_T7[1]["threshold"]
results=dftest_L_T7[' ATT_FLAG']
MODEL_L_T7=model_to_predict(trainset,testset,results,sax,int(len(dftrain1)/paa),n,t)
#'F_PU10'
trainset=dftrain1['F_PU10']
testset=dftest['F_PU10']
sax=F_PU10[1]["sax"]
paa=F_PU10[1]["paa"]
n=F_PU10[1]["ngram"]
t=F_PU10[1]["threshold"]
results=dftest_P_U10[' ATT_FLAG']
MODEL_F_PU10=model_to_predict(trainset,testset,results,sax,int(len(dftrain1)/paa),n,t)
#'F_PU11'
trainset=dftrain1['F_PU11']
testset=dftest['F_PU11']
sax=F_PU11[1]["sax"]
paa=F_PU11[1]["paa"]
n=F_PU11[1]["ngram"]
t=F_PU11[1]["threshold"]
results=dftest_P_U11[' ATT_FLAG']
MODEL_F_PU11=model_to_predict(trainset,testset,results,sax,int(len(dftrain1)/paa),n,t)
#'L_T6'
trainset=dftrain1['L_T6']
testset=dftest['L_T6']
sax=L_T1[1]["sax"]
paa=L_T1[1]["paa"]
n=L_T1[1]["ngram"]
t=L_T1[1]["threshold"]
results=dftest_L_T6[' ATT_FLAG']
MODEL_L_T6=model_to_predict(trainset,testset,results,sax,int(len(dftrain1)/paa),n,t)
#'L_T3'
ploting_cm(MODEL_LT3[0])
print(MODEL_LT3[1])
#'F_PU4'
ploting_cm(MODEL_F_PU4[0])
print(MODEL_F_PU4[1])
#'F_PU5'
ploting_cm(MODEL_F_PU5[0])
print(MODEL_F_PU5[1])
#'L_T2'
ploting_cm(MODEL_L_T2[0])
print(MODEL_L_T2[1])
#'P_J14'
ploting_cm(MODEL_P_J14[0])
print(MODEL_P_J14[1])
#'P_J422'
ploting_cm(MODEL_P_J422[0])
print(MODEL_P_J422[1])
#'L_T7'
ploting_cm(MODEL_L_T7[0])
print(MODEL_L_T7[1])
#'F_PU10'
ploting_cm(MODEL_F_PU10[0])
print(MODEL_F_PU10[1])
#'F_PU11'
ploting_cm(MODEL_F_PU11[0])
print(MODEL_F_PU11[1])
#'L_T6'
ploting_cm(MODEL_L_T6[0])
print(MODEL_L_T6[1]) | {'recall': 0.06666666666666667, 'precision': 0.011904761904761904, 'Accuracy': 0.9071325993298229, 'F1': 0.014245014245014244}
| MIT | batadal/Task3_Discrete.ipynb | imchengh/Cyber-Data-Analytics |
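The per-sensor blocks above all repeat the same pattern; they could equivalently be driven by a small mapping from each test sensor to the train-set-2 sensor whose tuned parameters it borrows. A minimal sketch, assuming the same `model_to_predict` helper and `dftest_*` ground-truth frames used above (the dictionary names are illustrative and the mapping just restates the choices made in the cells above):

```python
# Map each test sensor to (source of tuned parameters, ground-truth frame).
sensor_setup = {
    'L_T3': (L_T1, dftest_L_T3),    'F_PU4': (F_PU10, dftest_P_U4),
    'F_PU5': (F_PU10, dftest_P_U5), 'L_T2': (L_T1, dftest_L_T2),
    'P_J14': (L_T7, dftest_P_J14),  'P_J422': (L_T7, dftest_P_J422),
    'L_T7': (L_T7, dftest_L_T7),    'F_PU10': (F_PU10, dftest_P_U10),
    'F_PU11': (F_PU11, dftest_P_U11), 'L_T6': (L_T1, dftest_L_T6),
}

models_by_sensor = {}
for sensor, (params, truth) in sensor_setup.items():
    p = params[1]  # tuned 'sax', 'paa', 'ngram' and 'threshold' values
    models_by_sensor[sensor] = model_to_predict(
        dftrain1[sensor], dftest[sensor], truth[' ATT_FLAG'],
        p["sax"], int(len(dftrain1) / p["paa"]), p["ngram"], p["threshold"])
```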
Aggregating all the predictions All the predictions are aggregated into one dataset | #Getting the predictions for anomalies
dftest_with_predictions=dftest
#Loop that aggregates all the predictions into a single column of the test set
Model_set=[MODEL_LT3,MODEL_F_PU4,MODEL_F_PU5,MODEL_L_T2,MODEL_P_J14,MODEL_P_J422,MODEL_L_T7,MODEL_F_PU10,MODEL_F_PU11,MODEL_L_T6]
pred=[0]*len(Model_set[0][3])
for model in Model_set:
for i in model[3].index:
if model[3]["predictions"][i]==1:
pred[i]=1
dftest_with_predictions["predictions"]=pred | _____no_output_____ | MIT | batadal/Task3_Discrete.ipynb | imchengh/Cyber-Data-Analytics |
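An equivalent, more compact way to OR the per-model predictions together; a sketch assuming, as the loop above does, that each `model[3]` frame is indexed 0..len(dftest)-1:

```python
import numpy as np

# A time step is flagged as an attack if any per-sensor model flags it.
pred = np.zeros(len(dftest), dtype=int)
for model in Model_set:
    pred = np.maximum(pred, model[3]["predictions"].to_numpy()).astype(int)
dftest_with_predictions["predictions"] = pred
```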
Calculating the performance for the aggregate model | cm=confusion_matrix(list(dftest[" ATT_FLAG"]), pred)
#True negative, false positive, false negative and true positive
tn, fp, fn, tp = cm.ravel()
#Getting the ratios
recall=tp/(tp+fn)
precision=tp/(tp+fp)
#Plotting
ploting_cm(cm)
#Printing performance
print("Recall: "+str(recall))
print("Precision: "+str(precision)) | Recall: 0.8574938574938575
Precision: 0.20069005175388155
| MIT | batadal/Task3_Discrete.ipynb | imchengh/Cyber-Data-Analytics |
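The F1 score of the aggregate model follows directly from the same quantities; a small sketch using the precision and recall computed above:

```python
# Harmonic mean of precision and recall for the aggregate model.
f1 = 2 * precision * recall / (precision + recall)
print("F1: " + str(f1))
```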
Checking the type of anomaly Calculating the recall rate for each type of anomaly | #Calculating number of anomalies per type
Total_Collective_Anomalies=len(dftest_with_predictions[dftest_with_predictions["anomalytype"]=="collective"])
Total_Contextual_Anomalies=len(dftest_with_predictions[dftest_with_predictions["anomalytype"]=="contextual"])
#Calculating number of anomalies predicted per type
Predicted_Collective_Anomalies=len(dftest_with_predictions[(dftest_with_predictions["anomalytype"]=="collective") & (dftest_with_predictions["predictions"]==1)])
Predicted_Contextual_Anomalies=len(dftest_with_predictions[(dftest_with_predictions["anomalytype"]=="contextual") & (dftest_with_predictions["predictions"]==1)])
#Ratio - percentage predicted
Percentage_Collective_Predicted=Predicted_Collective_Anomalies/Total_Collective_Anomalies
Percentage_Contextual_Predicted=Predicted_Contextual_Anomalies/Total_Contextual_Anomalies
print("Percentage of Collective Anomalies Predicted: "+str(Percentage_Collective_Predicted))
print("Percentage of Contextual Anomalies Predicted: "+str(Percentage_Contextual_Predicted))
| Percentage of Collective Anomalies Predicted: 0.8643067846607669
Percentage of Contextual Anomalies Predicted: 0.8285714285714286
| MIT | batadal/Task3_Discrete.ipynb | imchengh/Cyber-Data-Analytics |
Custom Expectation Value Program for the Qiskit Runtime Paul Nation, IBM Quantum Partners Technical Enablement Team Here we will show how to make a program that takes a circuit, or list of circuits, and computes the expectation values of one or more diagonal operators. Prerequisites- You must have the latest Qiskit installed.- You must have either an IBM Cloud or an IBM Quantum account that can access Qiskit Runtime. Background The primary method by which information is obtained from quantum computers is via expectation values. Indeed, the samples that come from executing a quantum circuit multiple times, once converted to probabilities, can be viewed as just a finite-sample approximation to the expectation value for the projection operators corresponding to each bitstring. More practically, many quantum algorithms require computing expectation values over Pauli operators, e.g. Variational Quantum Eigensolvers, and thus having a runtime program that computes these quantities is of fundamental importance. Here we look at one such example, where a user passes one or more circuits and expectation operators and gets back the computed expectation values, and possibly error bounds. Expectation value of a diagonal operator Consider a generic observable given by the tensor product of diagonal operators over $N$ qubits $O = O_{N-1}\dots O_{0}$ where the subscript indicates the qubit on which the operator acts. Then for a set of observed $M$ bitstrings $\{b_{0}, \dots b_{M-1}\}$, where $M \leq 2^N $, with corresponding approximate probabilities $p_{m}$ the expectation value is given by$$\langle O\rangle \simeq \sum_{m=0}^{M-1} p_{m}\prod_{n=0}^{N-1}O_{n}[b_{m}[N-n-1], b_{m}[N-n-1]],$$where $O_{n}[b_{m}[N-n-1], b_{m}[N-n-1]]$ is the diagonal element of $O_{n}$ specified by the $(N-n-1)$th bit in bitstring $b_{m}$. The reason for the complicated indexing in `b_{m}` is that Qiskit uses least-significant bit indexing, where the zeroth element of the bit-strings is given by the right-most bit. Here we will use built-in routines to compute these expectation values. However, it is not hard to do yourself, with plenty of examples to be found. Main program Here we define our main function for the expectation value runtime program. As always, our program must start with the `backend` and `user_messenger` arguments, followed by the actual inputs we pass to the program. Here our options are quite simple:- `circuits`: A single QuantumCircuit or list of QuantumCircuits to be executed on the target backend.- `expectation_operators`: The operators we want to evaluate. These can be strings of diagonal Paulis, e.g. `ZIZZ`, or custom operators defined by dictionaries. For example, the projection operator on the all-ones state of 4 qubits is `{'1111': 1}`.- `shots`: How many times to sample each circuit.- `transpiler_config`: A dictionary that passes additional arguments on to the transpile function, e.g. `optimization_level`.- `run_config`: A dictionary that passes additional arguments on to `backend.run()`.- `skip_transpilation`: A flag to skip transpilation altogether and just run the circuits. This is useful for situations where you need to transpile parameterized circuits once, but must bind parameters multiple times and evaluate. - `return_stddev`: Flag to return a bound on the standard deviation. 
If using measurement mitigation this adds some overhead to the computation.- `use_measurement_mitigation`: Use M3 measurement mitigation and compute the expectation value and standard deviation bound from the quasi-probabilities. At the top of the cell below you will see a commented out `%%writefile sample_expval.py`. We will use this to convert the cell to a Python module named `sample_expval.py` to upload. | #%%writefile sample_expval.py
import mthree
from qiskit import transpile
# The entrypoint for our Runtime Program
def main(
backend,
user_messenger,
circuits,
expectation_operators="",
shots=8192,
transpiler_config={},
run_config={},
skip_transpilation=False,
return_stddev=False,
use_measurement_mitigation=False,
):
"""Compute expectation
values for a list of operators after
executing a list of circuits on the target backend.
Parameters:
backend (ProgramBackend): Qiskit backend instance.
user_messenger (UserMessenger): Used to communicate with the program user.
circuits: (QuantumCircuit or list): A single list of QuantumCircuits.
expectation_operators (str or dict or list): Expectation values to evaluate.
shots (int): Number of shots to take per circuit.
transpiler_config (dict): A collection of kwargs passed to transpile().
run_config (dict): A collection of kwargs passed to backend.run().
skip_transpilation (bool): Skip transpiling of circuits, default=False.
return_stddev (bool): Return upper bound on standard deviation,
default=False.
use_measurement_mitigation (bool): Improve results using measurement
error mitigation, default=False.
Returns:
array_like: Returns array of expectation values or a list of (expval, stddev)
tuples if return_stddev=True.
"""
# transpiling the circuits using given transpile options
if not skip_transpilation:
trans_circuits = transpile(circuits, backend=backend, **transpiler_config)
# Make sure everything is a list
if not isinstance(trans_circuits, list):
trans_circuits = [trans_circuits]
# If skipping set circuits -> trans_circuits
else:
if not isinstance(circuits, list):
trans_circuits = [circuits]
else:
trans_circuits = circuits
# If we are given a single circuit but requesting multiple expectation
# values, then set flag to make multiple pointers to same result.
duplicate_results = False
if isinstance(expectation_operators, list):
if len(expectation_operators) and len(trans_circuits) == 1:
duplicate_results = True
# If doing measurement mitigation we must build and calibrate a
# mitigator object. Will also determine which qubits need to be
# calibrated.
if use_measurement_mitigation:
# Get the measurement mappings at the end of the circuits
meas_maps = mthree.utils.final_measurement_mapping(trans_circuits)
# Get an M3 mitigator
mit = mthree.M3Mitigation(backend)
# Calibrate over the set of qubits measured in the transpiled circuits.
mit.cals_from_system(meas_maps)
# Compute raw results
result = backend.run(trans_circuits, shots=shots, **run_config).result()
raw_counts = result.get_counts()
# When using measurement mitigation we need to apply the correction and then
# compute the expectation values from the computed quasi-probabilities.
if use_measurement_mitigation:
quasi_dists = mit.apply_correction(
raw_counts, meas_maps, return_mitigation_overhead=return_stddev
)
if duplicate_results:
quasi_dists = mthree.classes.QuasiCollection(
[quasi_dists] * len(expectation_operators)
)
# There are two different calls depending on what we want returned.
if return_stddev:
return quasi_dists.expval_and_stddev(expectation_operators)
return quasi_dists.expval(expectation_operators)
# If the program didn't return in the mitigation loop above it means
# we are processing the raw_counts data. We do so here using the
# mthree utilities
if duplicate_results:
raw_counts = [raw_counts] * len(expectation_operators)
if return_stddev:
return mthree.utils.expval_and_stddev(raw_counts, expectation_operators)
return mthree.utils.expval(raw_counts, expectation_operators) | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
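To make the formula from the Background section concrete, here is a small standalone sketch (independent of mthree) that evaluates ⟨O⟩ for a diagonal Pauli string directly from a counts dictionary; the function name and the GHZ counts at the end are purely illustrative:

```python
def diagonal_expval(counts, pauli):
    """Expectation value of a diagonal Pauli string (I/Z only) from raw counts."""
    shots = sum(counts.values())
    expval = 0.0
    for bitstring, freq in counts.items():
        # Each Z acting on a qubit whose bit is 1 contributes a factor of -1.
        # Qiskit bitstrings are little-endian: qubit n is bitstring[N-n-1].
        sign = 1.0
        num_qubits = len(bitstring)
        for n, op in enumerate(reversed(pauli)):
            if op == "Z" and bitstring[num_qubits - n - 1] == "1":
                sign *= -1.0
        expval += sign * freq / shots
    return expval

# Ideal 4-qubit GHZ counts: <ZZZZ> = 1.0 and <IZZZ> = 0.0.
ghz_counts = {"0000": 512, "1111": 512}
print(diagonal_expval(ghz_counts, "ZZZZ"), diagonal_expval(ghz_counts, "IZZZ"))
```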
Local testingHere we test with a local "Fake" backend that mimics the noise properties of a real system and a 4-qubit GHZ state. | from qiskit import QuantumCircuit
from qiskit.test.mock import FakeSantiago
from qiskit_ibm_runtime import UserMessenger
msg = UserMessenger()
backend = FakeSantiago()
qc = QuantumCircuit(4)
qc.h(2)
qc.cx(2, 1)
qc.cx(1, 0)
qc.cx(2, 3)
qc.measure_all()
main(
backend,
msg,
qc,
expectation_operators=["ZZZZ", "IIII", "IZZZ"],
transpiler_config={
"optimization_level": 3,
"layout_method": "sabre",
"routing_method": "sabre",
},
run_config={},
skip_transpilation=False,
return_stddev=False,
use_measurement_mitigation=True,
) | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
If we have done our job correctly, the above should print out two expectation values close to one and a final expectation value close to zero. Program metadata Next we add the needed program data to a dictionary for uploading with our program. | meta = {
"name": "sample-expval",
"description": "A sample expectation value program.",
"max_execution_time": 1000,
"spec": {},
}
meta["spec"]["parameters"] = {
"$schema": "https://json-schema.org/draft/2019-09/schema",
"properties": {
"circuits": {
"description": "A single or list of QuantumCircuits.",
"type": ["array", "object"],
},
"expectation_operators": {
"description": "One or more expectation values to evaluate.",
"type": ["string", "object", "array"],
},
"shots": {"description": "Number of shots per circuit.", "type": "integer"},
"transpiler_config": {
"description": "A collection of kwargs passed to transpile.",
"type": "object",
},
"run_config": {
"description": "A collection of kwargs passed to backend.run. Default is False.",
"type": "object",
"default": False,
},
"return_stddev": {
"description": "Return upper-bound on standard deviation. Default is False.",
"type": "boolean",
"default": False,
},
"use_measurement_mitigation": {
"description": "Use measurement mitigation to improve results. Default is False.",
"type": "boolean",
"default": False,
},
},
"required": ["circuits"],
}
meta["spec"]["return_values"] = {
"$schema": "https://json-schema.org/draft/2019-09/schema",
"description": "A list of expectation values and optionally standard deviations.",
"type": "array",
} | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
Upload the program We are now in a position to upload the program. To do so we first uncomment and execute the line `%%writefile sample_expval.py`, giving us the `sample_expval.py` file we need to upload. | from qiskit_ibm_runtime import IBMRuntimeService
service = IBMRuntimeService(auth="legacy")
program_id = service.upload_program(data="sample_expval.py", metadata=meta)
program_id | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
Delete program if needed | # service.delete_program(program_id) | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
Wrapping the runtime program As always, it is best to wrap the call to the runtime program with a function (or possibly a class) that makes input easier and does some validation. | def expectation_value_runner(
backend,
circuits,
expectation_operators="",
shots=8192,
transpiler_config={},
run_config={},
skip_transpilation=False,
return_stddev=False,
use_measurement_mitigation=False,
):
"""Compute expectation values for a list of operators after
executing a list of circuits on the target backend.
Parameters:
backend (Backend or str): Qiskit backend instance or name.
circuits: (QuantumCircuit or list): A single or list of QuantumCircuits.
expectation_operators (str or dict or list): Expectation values to evaluate.
shots (int): Number of shots to take per circuit.
transpiler_config (dict): A collection of kwargs passed to transpile().
run_config (dict): A collection of kwargs passed to backend.run().
return_stddev (bool): Return upper bound on standard deviation,
default=False.
skip_transpilation (bool): Skip transpiling of circuits, default=False.
use_measurement_mitigation (bool): Improve results using measurement
error mitigation, default=False.
Returns:
array_like: Returns array of expectation values or a list of (expval, stddev)
pairs if return_stddev=True.
"""
if not isinstance(backend, str):
backend = backend.name
options = {"backend_name": backend}
if isinstance(circuits, list) and len(circuits) != 1:
if isinstance(expectation_operators, list):
if len(circuits) != 1 and len(expectation_operators) == 1:
expectation_operators = expectation_operators * len(circuits)
elif len(circuits) != len(expectation_operators):
raise ValueError(
"Number of circuits must match number of expectation \
values if more than one of each"
)
inputs = {}
inputs["circuits"] = circuits
inputs["expectation_operators"] = expectation_operators
inputs["shots"] = shots
inputs["transpiler_config"] = transpiler_config
inputs["run_config"] = run_config
inputs["return_stddev"] = return_stddev
inputs["skip_transpilation"] = skip_transpilation
inputs["use_measurement_mitigation"] = use_measurement_mitigation
return service.run(program_id, options=options, inputs=inputs) | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
Trying it out Let's try running the program here with our previously made GHZ state, running on the simulator. | backend = "ibmq_qasm_simulator"
all_zeros_proj = {"0000": 1}
all_ones_proj = {"1111": 1}
job = expectation_value_runner(backend, qc, [all_zeros_proj, all_ones_proj, "ZZZZ"])
job.result() | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
The first two projectors should be nearly $0.50$ as they tell us the probability of being in the all zeros and ones states, respectively, which should be 50/50 for our GHZ state. The final expectation value of `ZZZZ` should be one since this is a GHZ over an even number of qubits. It should go close to zero for an odd number. | qc2 = QuantumCircuit(3)
qc2.h(2)
qc2.cx(2, 1)
qc2.cx(1, 0)
qc2.measure_all()
all_zeros_proj = {"000": 1}
all_ones_proj = {"111": 1}
job2 = expectation_value_runner(backend, qc2, [all_zeros_proj, all_ones_proj, "ZZZ"])
job2.result() | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
Quantum Volume as an expectation value Here we formulate QV as an expectation value of a projector onto the heavy-output elements of a distribution. We can then use our expectation value routine to compute whether a given circuit has passed the QV metric. QV is defined in terms of the heavy-outputs of a distribution. Heavy-outputs are those bit-strings that have probabilities above the median value of the distribution. Below we define the projection operator onto the set of bit-strings that are heavy-outputs for a given distribution. | def heavy_projector(qv_probs):
"""Forms the projection operator onto the heavy-outputs of a given probability distribution.
Parameters:
qv_probs (dict): A dictionary of bitstrings and associated probabilities.
Returns:
dict: Projector onto the heavy-set.
"""
median_prob = np.median(list(qv_probs.values()))
heavy_strs = {}
for key, val in qv_probs.items():
if val > median_prob:
heavy_strs[key] = 1
return heavy_strs | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
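A quick sanity check of `heavy_projector` on a toy distribution; the probabilities below are made up purely for illustration:

```python
import numpy as np  # heavy_projector above relies on np.median

# Median of the four probabilities is 0.25, so only '10' and '11' are heavy.
toy_probs = {"00": 0.05, "01": 0.15, "10": 0.35, "11": 0.45}
print(heavy_projector(toy_probs))  # expected: {'10': 1, '11': 1}
```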
Now we generate 10 QV circuits as our dataset. | import numpy as np
from qiskit.quantum_info import Statevector
from qiskit.circuit.library import QuantumVolume
# Generate QV circuits
N = 10
qv_circs = [QuantumVolume(5) for _ in range(N)] | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
Next, we have to determine the heavy-set of each circuit from the ideal answer, and then pass this along to our heavy-set projector function that we defined above. | ideal_probs = [
Statevector.from_instruction(circ).probabilities_dict() for circ in qv_circs
]
heavy_projectors = [heavy_projector(probs) for probs in ideal_probs] | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
QV circuits have no measurements on them so we need to add them: | circs = [circ.measure_all(inplace=False) for circ in qv_circs] | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
With a list of circuits and projection operators we now need only to pass both sets to our expectation value runner above, targeting the desired backend. We will also set the best transpiler arguments to give us a sporting chance of getting some passing scores. | backend = "ibmq_manila"
job3 = expectation_value_runner(
backend,
circs,
heavy_projectors,
transpiler_config={
"optimization_level": 3,
"layout_method": "sabre",
"routing_method": "sabre",
},
)
qv_scores = job3.result()
qv_scores | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
A passing QV score is one where the value of the heavy-set projector is above $2/3$. So let us see who passed: | qv_scores > 2 / 3
from qiskit.tools.jupyter import *
%qiskit_copyright | _____no_output_____ | Apache-2.0 | docs/tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb | daka1510/qiskit-ibm-runtime |
Plagiarism Detection Model Now that you've created training and test data, you are ready to define and train a model. Your goal in this notebook will be to train a binary classification model that learns to label an answer file as either plagiarized or not, based on the features you provide the model. This task will be broken down into a few discrete steps:* Upload your data to S3.* Define a binary classification model and a training script.* Train your model and deploy it.* Evaluate your deployed classifier and answer some questions about your approach.To complete this notebook, you'll have to complete all given exercises and answer all the questions in this notebook.> All your tasks will be clearly labeled **EXERCISE** and questions as **QUESTION**.It will be up to you to explore different classification models and decide on a model that gives you the best performance for this dataset.--- Load Data to S3 In the last notebook, you should have created two files: a `train.csv` and `test.csv` file with the features and class labels for the given corpus of plagiarized/non-plagiarized text data. >The below cells load in some AWS SageMaker libraries and create a default bucket. After creating this bucket, you can upload your locally stored data to S3. Save your train and test `.csv` feature files locally. To do this you can run the second notebook "2_Plagiarism_Feature_Engineering" in SageMaker or you can manually upload your files to this notebook using the upload icon in Jupyter Lab. Then you can upload local files to S3 by using `sagemaker_session.upload_data` and pointing directly to where the training data is saved. | import pandas as pd
import boto3
import sagemaker
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# session and role
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
# create an S3 bucket
bucket = sagemaker_session.default_bucket() | _____no_output_____ | MIT | Project_Plagiarism_Detection/3_Training_a_Model.ipynb | gowtham91m/ML_SageMaker_Studies |
EXERCISE: Upload your training data to S3 Specify the `data_dir` where you've saved your `train.csv` file. Decide on a descriptive `prefix` that defines where your data will be uploaded in the default S3 bucket. Finally, create a pointer to your training data by calling `sagemaker_session.upload_data` and passing in the required parameters. It may help to look at the [Session documentation](https://sagemaker.readthedocs.io/en/stable/session.html#sagemaker.session.Session.upload_data) or previous SageMaker code examples. You are expected to upload your entire directory. Later, the training script will only access the `train.csv` file. | # should be the name of directory you created to save your features data
data_dir = 'plagiarism_data'
# set prefix, a descriptive name for a directory
prefix = 'plagiarism'
# upload all data to S3
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
input_data
| _____no_output_____ | MIT | Project_Plagiarism_Detection/3_Training_a_Model.ipynb | gowtham91m/ML_SageMaker_Studies |
Test cell Test that your data has been successfully uploaded. The below cell prints out the items in your S3 bucket and will throw an error if it is empty. You should see the contents of your `data_dir` and perhaps some checkpoints. If you see any other files listed, then you may have some old model files that you can delete via the S3 console (though, additional files shouldn't affect the performance of the model developed in this notebook). | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# confirm that data is in S3 bucket
empty_check = []
for obj in boto3.resource('s3').Bucket(bucket).objects.all():
empty_check.append(obj.key)
print(obj.key)
assert len(empty_check) !=0, 'S3 bucket is empty.'
print('Test passed!') | boston-xgboost-HL/output/sagemaker-xgboost-2019-11-03-22-54-54-949/output/model.tar.gz
boston-xgboost-HL/output/sagemaker-xgboost-2019-11-04-00-24-26-893/output/model.tar.gz
boston-xgboost-HL/test.csv
boston-xgboost-HL/train.csv
boston-xgboost-HL/validation.csv
ggwm_sagemaker/sentiment_rnn/train.csv
ggwm_sagemaker/sentiment_rnn/word_dict.pkl
plagiarism/test.csv
plagiarism/train.csv
sagemaker-pytorch-2019-11-10-02-05-57-682/output/model.tar.gz
sagemaker-pytorch-2019-11-10-02-05-57-682/source/sourcedir.tar.gz
sagemaker-pytorch-2019-11-10-03-12-30-854/sourcedir.tar.gz
sagemaker-pytorch-2019-11-10-20-16-07-445/output/model.tar.gz
sagemaker-pytorch-2019-11-10-20-16-07-445/source/sourcedir.tar.gz
sagemaker-pytorch-2019-11-10-21-22-40-738/sourcedir.tar.gz
sagemaker-xgboost-2019-11-04-00-32-31-041/test.csv.out
sentiment-xgboost/output/xgboost-2019-11-05-02-36-56-572/output/model.tar.gz
sentiment-xgboost/test.csv
sentiment-xgboost/train.csv
sentiment-xgboost/validation.csv
xgboost-2019-11-05-03-02-16-631/test.csv.out
Test passed!
| MIT | Project_Plagiarism_Detection/3_Training_a_Model.ipynb | gowtham91m/ML_SageMaker_Studies |
--- Modeling Now that you've uploaded your training data, it's time to define and train a model! The type of model you create is up to you. For a binary classification task, you can choose to go one of three routes:* Use a built-in classification algorithm, like LinearLearner.* Define a custom Scikit-learn classifier, a comparison of models can be found [here](https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html).* Define a custom PyTorch neural network classifier. It will be up to you to test out a variety of models and choose the best one. Your project will be graded on the accuracy of your final model. --- EXERCISE: Complete a training script To implement a custom classifier, you'll need to complete a `train.py` script. You've been given the folders `source_sklearn` and `source_pytorch` which hold starting code for a custom Scikit-learn model and a PyTorch model, respectively. Each directory has a `train.py` training script. To complete this project **you only need to complete one of these scripts**; the script that is responsible for training your final model. A typical training script:* Loads training data from a specified directory* Parses any training & model hyperparameters (ex. nodes in a neural network, training epochs, etc.)* Instantiates a model of your design, with any specified hyperparams* Trains that model * Finally, saves the model so that it can be hosted/deployed, later Defining and training a model Much of the training script code is provided for you. Almost all of your work will be done in the `if __name__ == '__main__':` section. To complete a `train.py` file, you will:1. Import any extra libraries you need2. Define any additional model training hyperparameters using `parser.add_argument`3. Define a model in the `if __name__ == '__main__':` section4. Train the model in that same section Below, you can use `!pygmentize` to display an existing `train.py` file. Read through the code; all of your tasks are marked with `TODO` comments. **Note: If you choose to create a custom PyTorch model, you will be responsible for defining the model in the `model.py` file,** and a `predict.py` file is provided. If you choose to use Scikit-learn, you only need a `train.py` file; you may import a classifier from the `sklearn` library. | # directory can be changed to: source_sklearn or source_pytorch
!pygmentize source_sklearn/train.py | [34mfrom[39;49;00m [04m[36m__future__[39;49;00m [34mimport[39;49;00m print_function
[34mimport[39;49;00m [04m[36margparse[39;49;00m
[34mimport[39;49;00m [04m[36mos[39;49;00m
[34mimport[39;49;00m [04m[36mpandas[39;49;00m [34mas[39;49;00m [04m[36mpd[39;49;00m
[34mfrom[39;49;00m [04m[36msklearn.externals[39;49;00m [34mimport[39;49;00m joblib
[37m## TODO: Import any additional libraries you need to define a model[39;49;00m
[37m# Provided model load function[39;49;00m
[34mdef[39;49;00m [32mmodel_fn[39;49;00m(model_dir):
[33m"""Load model from the model_dir. This is the same model that is saved[39;49;00m
[33m in the main if statement.[39;49;00m
[33m """[39;49;00m
[34mprint[39;49;00m([33m"[39;49;00m[33mLoading model.[39;49;00m[33m"[39;49;00m)
[37m# load using joblib[39;49;00m
model = joblib.load(os.path.join(model_dir, [33m"[39;49;00m[33mmodel.joblib[39;49;00m[33m"[39;49;00m))
[34mprint[39;49;00m([33m"[39;49;00m[33mDone loading model.[39;49;00m[33m"[39;49;00m)
[34mreturn[39;49;00m model
[37m## TODO: Complete the main code[39;49;00m
[34mif[39;49;00m [31m__name__[39;49;00m == [33m'[39;49;00m[33m__main__[39;49;00m[33m'[39;49;00m:
[37m# All of the model parameters and training parameters are sent as arguments[39;49;00m
[37m# when this script is executed, during a training job[39;49;00m
[37m# Here we set up an argument parser to easily access the parameters[39;49;00m
parser = argparse.ArgumentParser()
[37m# SageMaker parameters, like the directories for training data and saving models; set automatically[39;49;00m
[37m# Do not need to change[39;49;00m
parser.add_argument([33m'[39;49;00m[33m--output-data-dir[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ[[33m'[39;49;00m[33mSM_OUTPUT_DATA_DIR[39;49;00m[33m'[39;49;00m])
parser.add_argument([33m'[39;49;00m[33m--model-dir[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ[[33m'[39;49;00m[33mSM_MODEL_DIR[39;49;00m[33m'[39;49;00m])
parser.add_argument([33m'[39;49;00m[33m--data-dir[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ[[33m'[39;49;00m[33mSM_CHANNEL_TRAIN[39;49;00m[33m'[39;49;00m])
[37m## TODO: Add any additional arguments that you will need to pass into your model[39;49;00m
[37m# args holds all passed-in arguments[39;49;00m
args = parser.parse_args()
[37m# Read in csv training file[39;49;00m
training_dir = args.data_dir
train_data = pd.read_csv(os.path.join(training_dir, [33m"[39;49;00m[33mtrain.csv[39;49;00m[33m"[39;49;00m), header=[36mNone[39;49;00m, names=[36mNone[39;49;00m)
[37m# Labels are in the first column[39;49;00m
train_y = train_data.iloc[:,[34m0[39;49;00m]
train_x = train_data.iloc[:,[34m1[39;49;00m:]
[37m## --- Your code here --- ##[39;49;00m
[37m## TODO: Define a model [39;49;00m
model = [36mNone[39;49;00m
[37m## TODO: Train the model[39;49;00m
[37m## --- End of your code --- ##[39;49;00m
[37m# Save the trained model[39;49;00m
joblib.dump(model, os.path.join(args.model_dir, [33m"[39;49;00m[33mmodel.joblib[39;49;00m[33m"[39;49;00m))
| MIT | Project_Plagiarism_Detection/3_Training_a_Model.ipynb | gowtham91m/ML_SageMaker_Studies |
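For reference, here is one way the `TODO` sections of `source_sklearn/train.py` might be filled in, assuming a RandomForestClassifier and the two hyperparameters (`n_estimators`, `max_depth`) that the estimator defined below passes in. Only the changed lines are shown, and this is a sketch of one valid solution, not the only one:

```python
from sklearn.ensemble import RandomForestClassifier

## TODO: Add any additional arguments that you will need to pass into your model
parser.add_argument('--n_estimators', type=int, default=100)
parser.add_argument('--max_depth', type=int, default=None)
# (the existing args = parser.parse_args() call then picks these up)

## TODO: Define a model
model = RandomForestClassifier(n_estimators=args.n_estimators,
                               max_depth=args.max_depth)

## TODO: Train the model
model.fit(train_x, train_y)

# The provided joblib.dump(model, ...) line then saves the trained model unchanged.
```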
Provided code If you read the code above, you can see that the starter code includes a few things:* Model loading (`model_fn`) and saving code* Getting SageMaker's default hyperparameters* Loading the training data by name, `train.csv`, and extracting the features and labels, `train_x` and `train_y` If you'd like to read more about model saving with [joblib for sklearn](https://scikit-learn.org/stable/modules/model_persistence.html) or with [torch.save](https://pytorch.org/tutorials/beginner/saving_loading_models.html), click on the provided links. --- Create an Estimator When a custom model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained; the `train.py` function you specified above. To run a custom training script in SageMaker, construct an estimator, and fill in the appropriate constructor arguments:* **entry_point**: The path to the Python script SageMaker runs for training and prediction.* **source_dir**: The path to the training script directory `source_sklearn` OR `source_pytorch`.* **role**: Role ARN, which was specified above.* **train_instance_count**: The number of training instances (should be left at 1).* **train_instance_type**: The type of SageMaker instance for training. Note: Because Scikit-learn does not natively support GPU training, Sagemaker Scikit-learn does not currently support training on GPU instance types.* **sagemaker_session**: The session used to train on Sagemaker.* **hyperparameters** (optional): A dictionary `{'name':value, ..}` passed to the train function as hyperparameters. Note: For a PyTorch model, there is another optional argument **framework_version**, which you can set to the latest version of PyTorch, `1.0`. EXERCISE: Define a Scikit-learn or PyTorch estimator To import your desired estimator, use one of the following lines:```from sagemaker.sklearn.estimator import SKLearn``````from sagemaker.pytorch import PyTorch```
# your import and estimator code, here
from sagemaker.sklearn.estimator import SKLearn
sklearn = SKLearn(
entry_point="train.py",
source_dir="source_sklearn",
train_instance_type="ml.c4.xlarge",
role=role,
sagemaker_session=sagemaker_session,
hyperparameters={'n_estimators': 50 , 'max_depth':5}) | _____no_output_____ | MIT | Project_Plagiarism_Detection/3_Training_a_Model.ipynb | gowtham91m/ML_SageMaker_Studies |
EXERCISE: Train the estimatorTrain your estimator on the training data stored in S3. This should create a training job that you can monitor in your SageMaker console. | %%time
# Train your estimator on S3 training data
sklearn.fit({'train': input_data}) | 2019-12-13 23:28:35 Starting - Starting the training job...
2019-12-13 23:28:36 Starting - Launching requested ML instances......
2019-12-13 23:29:41 Starting - Preparing the instances for training...
2019-12-13 23:30:24 Downloading - Downloading input data...
2019-12-13 23:30:56 Training - Downloading the training image..[34m2019-12-13 23:31:16,696 sagemaker-containers INFO Imported framework sagemaker_sklearn_container.training[0m
[34m2019-12-13 23:31:16,699 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2019-12-13 23:31:16,709 sagemaker_sklearn_container.training INFO Invoking user training script.[0m
[34m2019-12-13 23:31:16,965 sagemaker-containers INFO Module train does not provide a setup.py. [0m
[34mGenerating setup.py[0m
[34m2019-12-13 23:31:16,965 sagemaker-containers INFO Generating setup.cfg[0m
[34m2019-12-13 23:31:16,965 sagemaker-containers INFO Generating MANIFEST.in[0m
[34m2019-12-13 23:31:16,965 sagemaker-containers INFO Installing module with the following command:[0m
[34m/miniconda3/bin/python -m pip install . [0m
[34mProcessing /opt/ml/code[0m
[34mBuilding wheels for collected packages: train
Building wheel for train (setup.py): started
Building wheel for train (setup.py): finished with status 'done'
Created wheel for train: filename=train-1.0.0-py2.py3-none-any.whl size=7116 sha256=8175331681d77ae08b5a7edf807529aacb856bbfadbaf975cc9d84a614322f0d
Stored in directory: /tmp/pip-ephem-wheel-cache-pn0jy4mv/wheels/35/24/16/37574d11bf9bde50616c67372a334f94fa8356bc7164af8ca3[0m
[34mSuccessfully built train[0m
[34mInstalling collected packages: train[0m
[34mSuccessfully installed train-1.0.0[0m
[34m2019-12-13 23:31:18,339 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2019-12-13 23:31:18,350 sagemaker-containers INFO Invoking user script
[0m
[34mTraining Env:
[0m
[34m{
"additional_framework_parameters": {},
"channel_input_dirs": {
"train": "/opt/ml/input/data/train"
},
"current_host": "algo-1",
"framework_module": "sagemaker_sklearn_container.training:main",
"hosts": [
"algo-1"
],
"hyperparameters": {
"max_depth": 5,
"n_estimators": 50
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {
"train": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
}
},
"input_dir": "/opt/ml/input",
"is_master": true,
"job_name": "sagemaker-scikit-learn-2019-12-13-23-28-34-846",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-east-1-088315321151/sagemaker-scikit-learn-2019-12-13-23-28-34-846/source/sourcedir.tar.gz",
"module_name": "train",
"network_interface_name": "eth0",
"num_cpus": 4,
"num_gpus": 0,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-1",
"hosts": [
"algo-1"
],
"network_interface_name": "eth0"
},
"user_entry_point": "train.py"[0m
[34m}
[0m
[34mEnvironment variables:
[0m
[34mSM_HOSTS=["algo-1"][0m
[34mSM_NETWORK_INTERFACE_NAME=eth0[0m
[34mSM_HPS={"max_depth":5,"n_estimators":50}[0m
[34mSM_USER_ENTRY_POINT=train.py[0m
[34mSM_FRAMEWORK_PARAMS={}[0m
[34mSM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}[0m
[34mSM_INPUT_DATA_CONFIG={"train":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}[0m
[34mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[34mSM_CHANNELS=["train"][0m
[34mSM_CURRENT_HOST=algo-1[0m
[34mSM_MODULE_NAME=train[0m
[34mSM_LOG_LEVEL=20[0m
[34mSM_FRAMEWORK_MODULE=sagemaker_sklearn_container.training:main[0m
[34mSM_INPUT_DIR=/opt/ml/input[0m
[34mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[34mSM_OUTPUT_DIR=/opt/ml/output[0m
[34mSM_NUM_CPUS=4[0m
[34mSM_NUM_GPUS=0[0m
[34mSM_MODEL_DIR=/opt/ml/model[0m
[34mSM_MODULE_DIR=s3://sagemaker-us-east-1-088315321151/sagemaker-scikit-learn-2019-12-13-23-28-34-846/source/sourcedir.tar.gz[0m
[34mSM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"train":"/opt/ml/input/data/train"},"current_host":"algo-1","framework_module":"sagemaker_sklearn_container.training:main","hosts":["algo-1"],"hyperparameters":{"max_depth":5,"n_estimators":50},"input_config_dir":"/opt/ml/input/config","input_data_config":{"train":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"sagemaker-scikit-learn-2019-12-13-23-28-34-846","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-088315321151/sagemaker-scikit-learn-2019-12-13-23-28-34-846/source/sourcedir.tar.gz","module_name":"train","network_interface_name":"eth0","num_cpus":4,"num_gpus":0,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"train.py"}[0m
[34mSM_USER_ARGS=["--max_depth","5","--n_estimators","50"][0m
[34mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[34mSM_CHANNEL_TRAIN=/opt/ml/input/data/train[0m
[34mSM_HP_MAX_DEPTH=5[0m
[34mSM_HP_N_ESTIMATORS=50[0m
[34mPYTHONPATH=/miniconda3/bin:/miniconda3/lib/python37.zip:/miniconda3/lib/python3.7:/miniconda3/lib/python3.7/lib-dynload:/miniconda3/lib/python3.7/site-packages
[0m
[34mInvoking script with the following command:
[0m
[34m/miniconda3/bin/python -m train --max_depth 5 --n_estimators 50
[0m
[34m/miniconda3/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp[0m
[34m2019-12-13 23:31:19,891 sagemaker-containers INFO Reporting training SUCCESS[0m
2019-12-13 23:31:28 Uploading - Uploading generated training model
2019-12-13 23:31:28 Completed - Training job completed
Training seconds: 64
Billable seconds: 64
CPU times: user 388 ms, sys: 19.7 ms, total: 408 ms
Wall time: 3min 11s
| MIT | Project_Plagiarism_Detection/3_Training_a_Model.ipynb | gowtham91m/ML_SageMaker_Studies |
EXERCISE: Deploy the trained model After training, deploy your model to create a `predictor`. If you're using a PyTorch model, you'll need to create a trained `PyTorchModel` that accepts the trained `.model_data` as an input parameter and points to the provided `source_pytorch/predict.py` file as an entry point. To deploy a trained model, you'll use `.deploy`, which takes in two arguments:* **initial_instance_count**: The number of deployed instances (1).* **instance_type**: The type of SageMaker instance for deployment. Note: If you run into an instance error, it may be because you chose the wrong training or deployment instance_type. It may help to refer to your previous exercise code to see which types of instances we used. | %%time
# uncomment, if needed
# from sagemaker.pytorch import PyTorchModel
# deploy your model to create a predictor
predictor = sklearn.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge") | ---------------------------------------------------------------------------------------------------------------!CPU times: user 600 ms, sys: 0 ns, total: 600 ms
Wall time: 9min 27s
| MIT | Project_Plagiarism_Detection/3_Training_a_Model.ipynb | gowtham91m/ML_SageMaker_Studies |
--- Evaluating Your Model Once your model is deployed, you can see how it performs when applied to our test data. The provided cell below reads in the test data, assuming it is stored locally in `data_dir` and named `test.csv`. The labels and features are extracted from the `.csv` file. | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import os
# read in test data, assuming it is stored locally
test_data = pd.read_csv(os.path.join(data_dir, "test.csv"), header=None, names=None)
# labels are in the first column
test_y = test_data.iloc[:,0]
test_x = test_data.iloc[:,1:] | _____no_output_____ | MIT | Project_Plagiarism_Detection/3_Training_a_Model.ipynb | gowtham91m/ML_SageMaker_Studies |
EXERCISE: Determine the accuracy of your model Use your deployed `predictor` to generate predicted class labels for the test data. Compare those to the *true* labels, `test_y`, and calculate the accuracy as a value between 0 and 1.0 that indicates the fraction of test data that your model classified correctly. You may use [sklearn.metrics](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics) for this calculation.**To pass this project, your model should get at least 90% test accuracy.** | # First: generate predicted class labels
test_y_preds = predictor.predict(test_x)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# test that your model generates the correct number of labels
assert len(test_y_preds)==len(test_y), 'Unexpected number of predictions.'
print('Test passed!')
from sklearn import metrics
# Second: calculate the test accuracy
accuracy = metrics.accuracy_score(y_true=test_y, y_pred=test_y_preds)
print(accuracy)
## print out the array of predicted and true labels, if you want
print('\nPredicted class labels: ')
print(test_y_preds)
print('\nTrue class labels: ')
print(test_y.values) | 1.0
Predicted class labels:
[1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 0 1 0 1 1 0 0]
True class labels:
[1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 0 1 0 1 1 0 0]
| MIT | Project_Plagiarism_Detection/3_Training_a_Model.ipynb | gowtham91m/ML_SageMaker_Studies |
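Before answering the questions below, it can help to tabulate the errors explicitly; a small sketch using sklearn's confusion_matrix on the predicted and true labels printed above:

```python
from sklearn.metrics import confusion_matrix

# Rows are true labels, columns are predicted labels.
tn, fp, fn, tp = confusion_matrix(test_y, test_y_preds).ravel()
print("False positives:", fp, "False negatives:", fn)
```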
Question 1: How many false positives and false negatives did your model produce, if any? And why do you think this is? **Answer**: The model reached 100% test accuracy, with no false positives and no false negatives. On such a small test set this is plausible, but it could also indicate overfitting or possible data leakage between the train and test splits. Question 2: How did you decide on the type of model to use? **Answer**: I started with tree-based models for classification, since in my experience they give consistently strong performance on this kind of tabular feature data; logistic regression would also be worth trying. ---- EXERCISE: Clean up Resources After you're done evaluating your model, **delete your model endpoint**. You can do this with a call to `.delete_endpoint()`. You need to show, in this notebook, that the endpoint was deleted. Any other resources you may delete from the AWS console, and you will find more instructions on cleaning up all your resources, below. | # uncomment and fill in the line below!
# <name_of_deployed_predictor>.delete_endpoint()
predictor.delete_endpoint() | _____no_output_____ | MIT | Project_Plagiarism_Detection/3_Training_a_Model.ipynb | gowtham91m/ML_SageMaker_Studies |
Deleting S3 bucket When you are *completely* done with training and testing models, you can also delete your entire S3 bucket. If you do this before you are done training your model, you'll have to recreate your S3 bucket and upload your training data again. | # deleting bucket, uncomment lines below
bucket_to_delete = boto3.resource('s3').Bucket(bucket)
bucket_to_delete.objects.all().delete() | _____no_output_____ | MIT | Project_Plagiarism_Detection/3_Training_a_Model.ipynb | gowtham91m/ML_SageMaker_Studies |
Scikit-Learn * It is considered the most widely used Python library for implementing methods based on machine learning algorithms.* The current version is 0.24.2 (April 2021).* URL: http://scikit-learn.org Problem Formulation * A **supervised** text classification problem.* Today we will investigate which machine learning method is most appropriate to solve it.* Consider a news site that publishes journalistic articles on several topics. * Economy, health, and sports are examples of topics. * The goal is to create a classifier that receives an input text and identifies the subject of that text.* The classifier assumes that each text is associated with a single topic.* It is a multiclass text classification problem.  Data Exploration Load the *dataset* | import pandas as pd
df = pd.read_csv('http://tiagodemelo.info/datasets/dataset-uol.csv')
df.head()
df.shape
df.CATEGORIA.unique() | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Check for null values (NaN) | df.isnull().any()
df.isnull().sum()
index_with_nan = df.index[df.isnull().any(axis=1)]
index_with_nan.shape
df.drop(index_with_nan, 0, inplace=True)
df.shape | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Add a column to the *dataset* | ids_categoria = df['CATEGORIA'].factorize()[0]
df['ID_CATEGORIA'] = ids_categoria
df.head(n=10)
df.ID_CATEGORIA.unique()
column_values = df[["ID_CATEGORIA", "CATEGORIA"]].values.ravel()
unique_values = pd.unique(column_values)
print(unique_values)
category_id_df = df[['CATEGORIA', 'ID_CATEGORIA']].drop_duplicates().sort_values('ID_CATEGORIA')
id_to_category = dict(category_id_df[['ID_CATEGORIA', 'CATEGORIA']].values)
id_to_category | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Distribution of the news articles across categories | import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8,6))
df.groupby('CATEGORIA').TEXTO.count().plot.bar()
plt.show() | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
* A recurring problem is **class imbalance**.* Conventional algorithms tend to favor the most frequent classes, i.e., they do not take the less frequent classes into account.* The less frequent classes are often treated as *outliers*.* *Undersampling* or *oversampling* strategies are applied to deal with this problem [[1]](https://en.wikipedia.org/wiki/Oversampling_and_undersampling_in_data_analysis).* Dealing with these strategies will be discussed later. Prepare the dataset so that all categories have the same number of articles | TAMANHO_DATASET = 200 # number of articles per class.
categorias = list(set(df['ID_CATEGORIA']))
data = []
for cat in categorias:
total = TAMANHO_DATASET
for c,t,i in zip(df['CATEGORIA'], df['TEXTO'], df['ID_CATEGORIA']):
if total>0 and cat == i:
total-=1
data.append([c,t,i])
df = pd.DataFrame(data, columns=['CATEGORIA','TEXTO','ID_CATEGORIA'])
fig = plt.figure(figsize=(8,6))
df.groupby('CATEGORIA').TEXTO.count().plot.bar(ylim=0)
plt.show() | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
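An equivalent, more concise way to keep at most TAMANHO_DATASET articles per category (instead of the explicit loop above) is pandas' groupby().head(); a sketch, assuming it is applied to the original df before the loop reassigns it (the name `df_balanced` is illustrative):

```python
# Keep the first TAMANHO_DATASET articles of each category (same effect as the loop above).
df_balanced = df.groupby('ID_CATEGORIA').head(TAMANHO_DATASET).reset_index(drop=True)
print(df_balanced.groupby('CATEGORIA').TEXTO.count())
```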
Text Representation * Machine learning methods work better with numerical representations than with raw text.* Therefore, the texts need to be converted.* *Bag of words* is a common way of representing texts.* We will compute the *Term Frequency* and *Inverse Document Frequency* measure, abbreviated as **TF-IDF**.* We will use `sklearn.feature_extraction.text.TfidfVectorizer` to compute the `tf-idf`. *Bag of Words* It is a text representation commonly used in problems related to natural language processing and information retrieval. sentence 1: "Os brasileiros gostam de futebol" sentence 2: "Os americanos adoram futebol e adoram basquete" | import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize
sentenca1 = "Os brasileiros gostam de futebol"
sentenca2 = "Os americanos adoram futebol e adoram basquete"
texto1 = word_tokenize(sentenca1)
texto2 = word_tokenize(sentenca2)
print (texto1)
print (texto2)
from nltk.probability import FreqDist
fdist1 = FreqDist(texto1)
fdist2 = FreqDist(texto2)
print(fdist1.most_common())
print(fdist2.most_common())
texto = texto1 + texto2
fdist = FreqDist(texto)
print(fdist.most_common()) | [('Os', 2), ('futebol', 2), ('adoram', 2), ('brasileiros', 1), ('gostam', 1), ('de', 1), ('americanos', 1), ('e', 1), ('basquete', 1)]
| MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
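The count vectors illustrated in the next cell can also be reproduced with scikit-learn's CountVectorizer; a small sketch using the same two example sentences (the names `frases`, `vectorizer` and `bow` are illustrative, the column order depends on the fitted vocabulary, and the custom token_pattern keeps short words such as "e" and "de"):

```python
from sklearn.feature_extraction.text import CountVectorizer

frases = ["Os brasileiros gostam de futebol",
          "Os americanos adoram futebol e adoram basquete"]
vectorizer = CountVectorizer(lowercase=False, token_pattern=r"\S+")
bow = vectorizer.fit_transform(frases)
print(vectorizer.get_feature_names())  # vocabulary (column order)
print(bow.toarray())                   # one count vector per sentence
```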
sentence 1: "Os brasileiros gostam de futebol" sentence 2: "Os americanos adoram futebol e adoram basquete"  Sentence 1: [1 1 0 1 1 1 0 0 0] Sentence 2: [1 1 2 0 0 0 1 1 1] TF-IDF TF represents the term frequency. IDF represents the inverse of the document frequency. Text in SKLearn Options (parameters) used:* `min_df` is the minimum number of documents in which a word must appear.* `encoding` is used so that the classifier can handle special characters.* `ngram_range` is set to consider unigrams and bigrams.* `stop_words` is set to reduce the number of undesirable terms. | from sklearn.feature_extraction.text import TfidfVectorizer
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
tfidf = TfidfVectorizer(min_df=5, encoding='latin-1', ngram_range=(1, 2), stop_words=stopwords.words('portuguese'))
features = tfidf.fit_transform(df.TEXTO.values.astype('U')).toarray()
labels = df.ID_CATEGORIA
features.shape | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
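For reference, the weight computed for a term $t$ in a document $d$ is essentially the product of the two quantities described above (scikit-learn's TfidfVectorizer additionally applies smoothing and L2 normalization); a standard formulation:

$$\mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \times \log\frac{N}{\mathrm{df}(t)}$$

where $N$ is the number of documents and $\mathrm{df}(t)$ is the number of documents containing $t$.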
We can use `sklearn.feature_selection.chi2` to find the terms that are most correlated with each category. | from sklearn.feature_selection import chi2
import numpy as np
N = 2
for Categoria, category_id in sorted(id_to_category.items()):
features_chi2 = chi2(features, labels == category_id)
indices = np.argsort(features_chi2[0])
feature_names = np.array(tfidf.get_feature_names())[indices]
unigrams = [v for v in feature_names if len(v.split(' ')) == 1]
bigrams = [v for v in feature_names if len(v.split(' ')) == 2]
print("# '{}':".format(Categoria))
print(" . Unigramas mais correlacionados:\n. {}".format('\n. '.join(unigrams[-N:])))
print(" . Bigramas mais correlacionados:\n. {}".format('\n. '.join(bigrams[-N:]))) | # '0':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '1':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '2':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '3':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '4':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '5':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '6':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
# '7':
. Unigramas mais correlacionados:
. equipe
. único
. Bigramas mais correlacionados:
. duas vezes
. entrevista coletiva
| MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Building a Classifier Import libraries: | from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Split the *dataset* into **train** and **test** sets | X_train, X_test, y_train, y_test = train_test_split(df['TEXTO'], df['CATEGORIA'], test_size=0.2, random_state = 0) | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Create a model (Naive Bayes) | count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train.values.astype('U'))
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
clf = MultinomialNB().fit(X_train_tfidf, y_train) | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Test the trained classifier: | sentenca = 'O brasileiro gosta de futebol.'
print(clf.predict(count_vect.transform([sentenca]))) | ['esporte']
| MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
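A possible refinement, sketched here under the assumption that `X_train` and `y_train` from the split above are available: the `CountVectorizer`, `TfidfTransformer`, and `MultinomialNB` steps can be chained into a single scikit-learn `Pipeline`, which keeps the vectorizer and the classifier together and removes the need to call `count_vect.transform` by hand at prediction time.

```python
# Sketch: the same Naive Bayes model wrapped in a single Pipeline (assumes X_train/y_train exist).
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB

text_clf = Pipeline([
    ("counts", CountVectorizer()),
    ("tfidf", TfidfTransformer()),
    ("nb", MultinomialNB()),
])
text_clf.fit(X_train.values.astype('U'), y_train)
print(text_clf.predict(['O brasileiro gosta de futebol.']))
```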
Model Selection We will now experiment with different machine learning models and evaluate their accuracy.The following models will be considered:* Logistic Regression (LR)* Multinomial Naive Bayes (NB)* Linear Support Vector Machine (SVM)* Random Forest (RF) Import libraries: | from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
List of models: | models = [
RandomForestClassifier(n_estimators=200, max_depth=3, random_state=0),
LinearSVC(C=10, class_weight=None, dual=True, fit_intercept=True, intercept_scaling=1, loss='squared_hinge', max_iter=1000, multi_class='ovr', penalty='l2', random_state=None, tol=0.0001, verbose=0),
MultinomialNB(),
LogisticRegression(random_state=0),
] | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Cross-Validation Cross-validation is a resampling method whose goal is to assess the model's ability to **generalize**. The split between training and test data is usually done as follows:  In cross-validation:  Using cross-validation with 5 *folds*: | CV = 5 | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
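To make the resampling concrete (an added sketch standing in for the figures that originally illustrated this section): with 5 folds, the data is split into 5 parts and each model is fit 5 times, each time holding out a different fifth for evaluation.

```python
# Sketch: how 5-fold cross-validation partitions the samples.
import numpy as np
from sklearn.model_selection import KFold

X_demo = np.arange(10)  # ten dummy sample indices
for fold, (train_idx, test_idx) in enumerate(KFold(n_splits=5).split(X_demo)):
    print(f"fold {fold}: train on {train_idx}, evaluate on {test_idx}")
```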
Generating the models: | cv_df = pd.DataFrame(index=range(CV * len(models)))
entries = []
for model in models:
model_name = model.__class__.__name__
accuracies = cross_val_score(model, features, labels, scoring='accuracy', cv=CV)
for fold_idx, accuracy in enumerate(accuracies):
entries.append((model_name, fold_idx, accuracy))
cv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'accuracy']) | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
BoxPlot Chart  Plot the comparison chart: | import seaborn as sns
sns.boxplot(x='model_name', y='accuracy', data=cv_df)
sns.stripplot(x='model_name', y='accuracy', data=cv_df,
size=8, jitter=True, edgecolor="gray", linewidth=2)
plt.show() | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Mean accuracy of each model across the 5 folds: | cv_df.groupby('model_name').accuracy.mean() | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Confusion Matrix Building an SVM-based model: | model = LinearSVC()
X_train, X_test, y_train, y_test, indices_train, indices_test = train_test_split(features, labels, df.index, test_size=0.33, random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test) | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Plot the confusion matrix for the SVM model: | from sklearn.metrics import confusion_matrix
conf_mat = confusion_matrix(y_test, y_pred)
fig, ax = plt.subplots(figsize=(10,10))
sns.heatmap(conf_mat, annot=True, fmt='d',
xticklabels=category_id_df.CATEGORIA.values, yticklabels=category_id_df.CATEGORIA.values)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show() | _____no_output_____ | MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Let's examine the incorrect results.Displaying misclassified texts: | from IPython.display import display
for predicted in category_id_df.ID_CATEGORIA:
for actual in category_id_df.ID_CATEGORIA:
if predicted != actual and conf_mat[actual, predicted] >= 5:
print("'{}' predicted as '{}' : {} examples.".format(id_to_category[actual], id_to_category[predicted], conf_mat[actual, predicted]))
display(df.loc[indices_test[(y_test == actual) & (y_pred == predicted)]][['CATEGORIA', 'TEXTO']])
print('') | 'esporte' predicted as 'coronavirus' : 6 examples.
| MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
Report the classifier's results for each class: | from sklearn import metrics
print(metrics.classification_report(y_test, y_pred, target_names=df['CATEGORIA'].unique())) | precision recall f1-score support
coronavirus 0.35 0.26 0.30 66
politica 0.67 0.69 0.68 58
esporte 0.70 0.58 0.63 78
carro 0.65 0.50 0.56 70
educacao 0.48 0.74 0.58 62
entretenimento 0.55 0.52 0.54 71
economia 0.47 0.54 0.50 56
saude 0.59 0.66 0.62 67
accuracy 0.56 528
macro avg 0.56 0.56 0.55 528
weighted avg 0.56 0.56 0.55 528
| MIT | MO_Aula_SciKit.ipynb | anarossati/Mineracao_dados_Ocean_05_2021 |
UUV Single-Agent Environment Creating the environment | # initialize setup
from fimdpenv import setup, UUVEnv
setup()
# create environment
from fimdpenv.UUVEnv import SingleAgentEnv
env = SingleAgentEnv(grid_size=[20,20], capacity=50, reloads=[22,297], targets=[67, 345, 268], init_state=350, enhanced_actionspace=0)
env
# get consmdp
mdp, targets = env.get_consmdp()
# list of targets
env.targets
# list of reloads
env.reloads
# converting state_id to grid coordinates
env.get_state_coord(265)
# converting grid coordinates to state_id
env.get_state_id(13,5) | _____no_output_____ | MIT | examples/UUVEnv_singleagent.ipynb | pthangeda/FiMDPEnv |
Creating and visualizing strategies | # generate strategies
from fimdp.objectives import BUCHI
from fimdp.energy_solvers import GoalLeaningES
from fimdp.core import CounterStrategy
env.create_counterstrategy(solver=GoalLeaningES, objective=BUCHI, threshold=0.1)
# animate strategies
env.animate_simulation(num_steps=50, interval=100)
# histories
#env.state_histories
#env.action_histories
env.target_histories | _____no_output_____ | MIT | examples/UUVEnv_singleagent.ipynb | pthangeda/FiMDPEnv |
Resetting strategies and single transition | env.reset(init_state=98, reset_energy=20)
env
env.strategies
env.state_histories
env.step()
env
env.state_histories | _____no_output_____ | MIT | examples/UUVEnv_singleagent.ipynb | pthangeda/FiMDPEnv |
Enhanced action space | # create environment with enhanced action space
from fimdpenv.UUVEnv import SingleAgentEnv
env2 = SingleAgentEnv(grid_size=[20,20], capacity=50, reloads=[22,297, 198, 76], targets=[67, 345, 268], init_state=350, enhanced_actionspace=1)
env2
# create strategies
from fimdp.objectives import BUCHI
from fimdp.energy_solvers import GoalLeaningES
from fimdp.core import CounterStrategy
mdp, targets = env2.get_consmdp()
env2.create_counterstrategy()
# animate strategies
env2.animate_simulation(num_steps=50, interval=100) | _____no_output_____ | MIT | examples/UUVEnv_singleagent.ipynb | pthangeda/FiMDPEnv |
Configurable closed-loop optimization with Ax `Scheduler`*We recommend reading through the ["Developer API" tutorial](https://ax.dev/tutorials/gpei_hartmann_developer.html) before getting started with the `Scheduler`, as using it in this tutorial will require an Ax `Experiment` and an understanding of the experiment's subcomponents like the search space and the runner.* Contents:1. **Scheduler and external systems for trial evalution** –– overview of how scheduler works with an external system to run a closed-loop optimization.2. **Set up a mock external system** –– creating a dummy external system client, which will be used to illustrate a scheduler setup in this tutorial.3. **Set up an experiment according to the mock external system** –– set up a runner that deploys trials to the dummy external system from part 2 and a metric that fetches trial results from that system, then leverage those runner and metric and set up an experiment.4. **Set up a scheduler**, given an experiment. 1. Create a scheduler subclass to poll trial status. 2. Set up a generation strategy using an auto-selection utility.5. **Running the optimization** via `Scheduler.run_n_trials`.6. **Leveraging SQL storage and experiment resumption** –– resuming an experiment in one line of code.7. **Configuring the scheduler** –– overview of the many options scheduler provides to configure the closed-loop down to granular detail.8. **Advanced functionality**: 1. Reporting results to an external system during the optimization. 2. Using `Scheduler.run_trials_and_yield_results` to run the optimization via a generator method. 1. `Scheduler` and external systems for trial evaluation`Scheduler` is a closed-loop manager class in Ax that continuously deploys trial runs to an arbitrary external system in an asynchronous fashion, polls their status from that system, and leverages known trial results to generate more trials.Key features of the `Scheduler`:- Maintains user-set concurrency limits for trials run in parallel, keep track of tolerated level of failed trial runs, and 'oversee' the optimization in other ways,- Leverages an Ax `Experiment` for optimization setup (an optimization config with metrics, a search space, a runner for trial evaluations),- Uses an Ax `GenerationStrategy` for flexible specification of an optimization algorithm used to generate new trials to run,- Supports SQL storage and allows for easy resumption of stored experiments. This scheme summarizes how the scheduler interacts with any external system used to run trial evaluations: 2. Set up a mock external execution system An example of an 'external system' running trial evaluations could be a remote server executing scheduled jobs, a subprocess conducting ML training runs, an engine running physics simulations, etc. For the sake of example here, let us assume a dummy external system with the following client: | from typing import Any, Dict, NamedTuple, List, Union
from time import time
from random import randint
from ax.core.base_trial import TrialStatus
from ax.utils.measurement.synthetic_functions import branin
class MockJob(NamedTuple):
"""Dummy class to represent a job scheduled on `MockJobQueue`."""
id: int
parameters: Dict[str, Union[str, float, int, bool]]
class MockJobQueueClient:
"""Dummy class to represent a job queue where the Ax `Scheduler` will
deploy trial evaluation runs during optimization.
"""
jobs: Dict[str, MockJob] = {}
def schedule_job_with_parameters(
self,
parameters: Dict[str, Union[str, float, int, bool]]
) -> int:
"""Schedules an evaluation job with given parameters and returns job ID.
"""
# Code to actually schedule the job and produce an ID would go here;
# using timestamp as dummy ID for this example.
job_id = int(time())
self.jobs[job_id] = MockJob(job_id, parameters)
return job_id
def get_job_status(self, job_id: int) -> TrialStatus:
""""Get status of the job by a given ID. For simplicity of the example,
return an Ax `TrialStatus`.
"""
job = self.jobs[job_id]
# Instead of randomizing trial status, code to check actual job status
# would go here.
if randint(0, 3) > 0:
return TrialStatus.COMPLETED
return TrialStatus.RUNNING
def get_outcome_value_for_completed_job(self, job_id: int) -> Dict[str, float]:
"""Get evaluation results for a given completed job."""
job = self.jobs[job_id]
# In a real external system, this would retrieve real relevant outcomes and
# not a synthetic function value.
return {
"branin": branin(job.parameters.get("x1"), job.parameters.get("x2"))
}
MOCK_JOB_QUEUE_CLIENT = MockJobQueueClient()
def get_mock_job_queue_client() -> MockJobQueueClient:
"""Obtain the singleton job queue instance."""
return MOCK_JOB_QUEUE_CLIENT | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
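As a quick sanity check, not part of the original tutorial, the dummy client can be exercised on its own: schedule a job, poll its status, and read back the synthetic Branin outcome. Only functions defined in the cell above are used.

```python
# Sketch: exercising the mock external system directly.
client = get_mock_job_queue_client()
job_id = client.schedule_job_with_parameters({"x1": 1.0, "x2": 2.0})
print(client.get_job_status(job_id))                       # randomly RUNNING or COMPLETED
print(client.get_outcome_value_for_completed_job(job_id))  # {'branin': <synthetic value>}
```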
3. Set up an experiment according to the mock external systemAs mentioned above, using a `Scheduler` requires a fully set up experiment with metrics and a runner. Refer to the "Building Blocks of Ax" tutorial to learn more about those components, as here we assume familiarity with them. The following runner and metric set up interactions between the `Scheduler` and the mock external system we assume: | from ax.core.runner import Runner
from ax.core.base_trial import BaseTrial
from ax.core.trial import Trial
class MockJobRunner(Runner): # Deploys trials to external system.
def run(self, trial: BaseTrial) -> Dict[str,Any]:
"""Deploys a trial and returns dict of run metadata."""
if not isinstance(trial, Trial):
raise ValueError("This runner only handles `Trial`.")
mock_job_queue = get_mock_job_queue_client()
job_id = mock_job_queue.schedule_job_with_parameters(
parameters=trial.arm.parameters
)
# This run metadata will be attached to trial as `trial.run_metadata`
# by the base `Scheduler`.
return {"job_id": job_id}
import pandas as pd
from ax.core.metric import Metric
from ax.core.base_trial import BaseTrial
from ax.core.data import Data
class BraninForMockJobMetric(Metric): # Pulls data for trial from external system.
def fetch_trial_data(self, trial: BaseTrial) -> Data:
"""Obtains data via fetching it from ` for a given trial."""
if not isinstance(trial, Trial):
raise ValueError("This metric only handles `Trial`.")
mock_job_queue = get_mock_job_queue_client()
# Here we leverage the "job_id" metadata created by `MockJobRunner.run`.
branin_data = mock_job_queue.get_outcome_value_for_completed_job(
job_id=trial.run_metadata.get("job_id")
)
df_dict = {
"trial_index": trial.index,
"metric_name": "branin",
"arm_name": trial.arm.name,
"mean": branin_data.get("branin"),
# Can be set to 0.0 if function is known to be noiseless
# or to an actual value when SEM is known. Setting SEM to
# `None` results in Ax assuming unknown noise and inferring
# noise level from data.
"sem": None,
}
return Data(df=pd.DataFrame.from_records([df_dict])) | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
Now we can set up the experiment using the runner and metric we defined. This experiment will have a single-objective optimization config, minimizing the Branin function, and the search space that corresponds to that function. | from ax import *
def make_branin_experiment_with_runner_and_metric() -> Experiment:
parameters = [
RangeParameter(
name="x1",
parameter_type=ParameterType.FLOAT,
lower=-5,
upper=10,
),
RangeParameter(
name="x2",
parameter_type=ParameterType.FLOAT,
lower=0,
upper=15,
),
]
objective=Objective(metric=BraninForMockJobMetric(name="branin"), minimize=True)
return Experiment(
name="branin_test_experiment",
search_space=SearchSpace(parameters=parameters),
optimization_config=OptimizationConfig(objective=objective),
runner=MockJobRunner(),
is_test=True, # Marking this experiment as a test experiment.
)
experiment = make_branin_experiment_with_runner_and_metric() | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
4. Setting up a `Scheduler` 4a. Subclassing `Scheduler`The base Ax `Scheduler` is abstract and must be subclassed, but only one method must be implemented on the subclass: `poll_trial_status`. | from typing import Dict, Set
from random import randint
from collections import defaultdict
from ax.service.scheduler import Scheduler, SchedulerOptions, TrialStatus
class MockJobQueueScheduler(Scheduler):
def poll_trial_status(self) -> Dict[TrialStatus, Set[int]]:
"""Queries the external system to compute a mapping from trial statuses
to sets of indices of trials that are currently in that status. Only needs
to query for trials that are currently running but can query for all
trials too.
"""
status_dict = defaultdict(set)
for trial in self.running_trials: # `running_trials` is exposed on base `Scheduler`
mock_job_queue = get_mock_job_queue_client()
status = mock_job_queue.get_job_status(job_id=trial.run_metadata.get("job_id"))
status_dict[status].add(trial.index)
return status_dict | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
4B. Auto-selecting a generation strategyA `Scheduler` also requires an Ax `GenerationStrategy` specifying the algorithm to use for the optimization. Here we use the `choose_generation_strategy` utility that auto-picks a generation strategy based on the search space properties. To construct a custom generation strategy instead, refer to the ["Generation Strategy" tutorial](https://ax.dev/tutorials/generation_strategy.html).Importantly, a generation strategy in Ax limits allowed parallelism levels for each generation step it contains. If you would like the `Scheduler` to ensure parallelism limitations, set `max_examples` on each generation step in your generation strategy. | from ax.modelbridge.dispatch_utils import choose_generation_strategy
generation_strategy = choose_generation_strategy(
search_space=experiment.search_space,
max_parallelism_cap=3,
) | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
Now we have all the components needed to start the scheduler: | scheduler = MockJobQueueScheduler(
experiment=experiment,
generation_strategy=generation_strategy,
options=SchedulerOptions(),
) | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
5. Running the optimizationOnce the `Scheduler` instance is set up, the user can execute `run_n_trials` as many times as needed, and each execution will add up to the specified `max_trials` trials to the experiment. The number of trials actually run might be less than `max_trials` if the optimization was concluded (e.g. there are no more points in the search space). | scheduler.run_n_trials(max_trials=3) | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
We can examine `experiment` to see that it now has three trials: | from ax.service.utils.report_utils import exp_to_df
exp_to_df(experiment) | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
Now we can run `run_n_trials` again to add three more trials to the experiment. | scheduler.run_n_trials(max_trials=3) | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
Examining the experiment, we now see 6 trials, one of which is produced by Bayesian optimization (GPEI): | exp_to_df(experiment) | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
For each call to `run_n_trials`, one can specify a timeout; if `run_n_trials` has been running for too long without finishing its `max_trials`, the operation will exit gracefully: | scheduler.run_n_trials(max_trials=3, timeout_hours=0.00001) | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
6. Leveraging SQL storage and experiment resumptionWhen a scheduler is SQL-enabled, it will automatically save all updates it makes to the experiment in the course of the optimization. The experiment can then be resumed in the event of a crash or after a pause.To store the state of the optimization to an SQL backend, first follow the [setup instructions](https://ax.dev/docs/storage.html#sql) on the Ax website. Having set up the SQL backend, pass `DBSettings` to the `Scheduler` on instantiation (note that the SQLAlchemy dependency will have to be installed – for installation, refer to [optional dependencies](https://ax.dev/docs/installation.html#optional-dependencies) on the Ax website): | from ax.storage.sqa_store.structs import DBSettings
from ax.storage.sqa_store.db import init_engine_and_session_factory, get_engine, create_all_tables
from ax.storage.metric_registry import register_metric
from ax.storage.runner_registry import register_runner
register_metric(BraninForMockJobMetric)
register_runner(MockJobRunner)
# URL is of the form "dialect+driver://username:password@host:port/database".
# Instead of URL, can provide a `creator function`; can specify custom encoders/decoders if necessary.
db_settings = DBSettings(url="sqlite:///foo.db")
# The following lines are only necessary because it is the first time we are using this database
# in practice, you will not need to run these lines every time you initialize your scheduler
init_engine_and_session_factory(url=db_settings.url)
engine = get_engine()
create_all_tables(engine)
stored_experiment = make_branin_experiment_with_runner_and_metric()
generation_strategy = choose_generation_strategy(search_space=experiment.search_space)
scheduler_with_storage = MockJobQueueScheduler(
experiment=stored_experiment,
generation_strategy=generation_strategy,
options=SchedulerOptions(),
db_settings=db_settings,
)
stored_experiment.name | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
To resume a stored experiment: | reloaded_experiment_scheduler = MockJobQueueScheduler.from_stored_experiment(
experiment_name='branin_test_experiment',
options=SchedulerOptions(),
# `DBSettings` are also required here so scheduler has access to the
# database, from which it needs to load the experiment.
db_settings=db_settings,
) | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
With the newly reloaded experiment, the `Scheduler` can continue the optimization: | reloaded_experiment_scheduler.run_n_trials(max_trials=3) | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
7. Configuring the scheduler`Scheduler` exposes many options to configure the exact settings of the closed-loop optimization to perform. A few notable ones are:- `trial_type` –– currently only `Trial` and not `BatchTrial` is supported, but support for `BatchTrial`-s will follow,- `tolerated_trial_failure_rate` and `min_failed_trials_for_failure_rate_check` –– together these two settings control how the scheduler monitors the failure rate among trial runs it deploys. Once `min_failed_trials_for_failure_rate_check` is deployed, the scheduler will start checking whether the ratio of failed to total trials is greater than `tolerated_trial_failure_rate`, and if it is, scheduler will exit the optimization with a `FailureRateExceededError`,- `ttl_seconds_for_trials` –– sometimes a failure in a trial run means that it will be difficult to query its status (e.g. due to a crash). If this setting is specified, the Ax `Experiment` will automatically mark trials that have been running for too long (more than their 'time-to-live' (TTL) seconds) as failed,- `run_trials_in_batches` –– if `True`, the scheduler will attempt to run trials not by calling `Scheduler.run_trial` in a loop, but by calling `Scheduler.run_trials` on all ready-to-deploy trials at once. This could allow for saving compute in cases where the deployment operation has large overhead and deploying many trials at once saves compute. Note that using this option successfully will require your scheduler subclass to implement `MySchedulerSubclass.run_trials` and `MySchedulerSubclass.poll_available_capacity`.The rest of the options is described in the docstring below: | print(SchedulerOptions.__doc__) | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
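As an illustrative sketch, not from the original tutorial: a `SchedulerOptions` instance configured with the settings described above might look like the following. The values are arbitrary, and the exact argument names and defaults should be checked against the `SchedulerOptions` docstring printed in the previous cell.

```python
# Illustrative only: argument names follow the bullet list above; values are arbitrary.
configured_options = SchedulerOptions(
    tolerated_trial_failure_rate=0.2,            # give up if more than 20% of trials fail...
    min_failed_trials_for_failure_rate_check=5,  # ...but only once at least 5 trials have failed
    ttl_seconds_for_trials=3600,                 # auto-fail trials that run longer than an hour
    run_trials_in_batches=False,                 # deploy ready trials one at a time
)
configured_scheduler = MockJobQueueScheduler(
    experiment=experiment,
    generation_strategy=generation_strategy,
    options=configured_options,
)
```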
8. Advanced functionality 8a. Reporting results to an external systemThe `Scheduler` can report the optimization result to an external system each time there are new completed trials if the user-implemented subclass implements `MySchedulerSubclass.report_results` to do so. For example, the following method:```class MySchedulerSubclass(Scheduler): ... def report_results(self): write_to_external_database(len(self.experiment.trials)) return (True, {}) # Returns optimization success status and optional dict of outputs.```could be used to record the number of trials in the experiment so far in an external database.Since `report_results` is an instance method, it has access to `self.experiment` and `self.generation_strategy`, which contain all the information about the state of the optimization thus far. 8b. Using `run_trials_and_yield_results` generator methodIn some systems it's beneficial to have greater control over `Scheduler.run_n_trials` instead of just starting it and needing to wait for it to run all the way to completion before having access to its output. For this purpose, the `Scheduler` implements a generator method `run_trials_and_yield_results`, which yields the output of `Scheduler.report_results` each time there are new completed trials and can be used like so: | class ResultReportingScheduler(MockJobQueueScheduler):
def report_results(self):
return True, {
"trials so far": len(self.experiment.trials),
"currently producing trials from generation step": self.generation_strategy._curr.model_name,
"running trials": [t.index for t in self.running_trials],
}
experiment = make_branin_experiment_with_runner_and_metric()
scheduler = ResultReportingScheduler(
experiment=experiment,
generation_strategy=choose_generation_strategy(
search_space=experiment.search_space,
max_parallelism_cap=3,
),
options=SchedulerOptions(),
)
for reported_result in scheduler.run_trials_and_yield_results(max_trials=6):
print("Reported result: ", reported_result) | _____no_output_____ | MIT | tutorials/scheduler.ipynb | trsvchn/Ax |
Imports | import os
import multiprocessing
#import rasterio
import tensorflow as tf
from glob import glob
import pickle
import numpy as np
import os
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras import layers
import keras
import pandas as pd
#import matplotlib.pyplot as plt
tf.__version__ | _____no_output_____ | MIT | model-development/_archive/ml_explore_David_2.ipynb | Project-Canopy/SSRC_New_Model_Development |
Test with Simple CNN and Data Loader | import boto3
import rasterio as rio
import numpy as np
import io
from data_loader import DataLoader
batch_size = 20
gen = DataLoader(label_file_path_train="labels_test_v1.csv",
label_file_path_val="val_labels.csv",
label_mapping_path="labels.json",
bucket_name='canopy-production-ml',
data_extension_type='.tif',
training_data_shape=(100, 100, 18),
augment=True,
random_flip_up_down=False, #Randomly flips an image vertically (upside down). With a 1 in 2 chance, outputs the contents of `image` flipped along the first dimension, which is `height`.
random_flip_left_right=False,
flip_left_right=False,
flip_up_down=False,
rot90=True,
transpose=False,
enable_shuffle=False,
training_data_shuffle_buffer_size=10,
training_data_batch_size=batch_size,
training_data_type=tf.float32,
label_data_type=tf.uint8,
num_parallel_calls=int(2))
# TODO add data augmentation in DataLoader
# no_of_val_imgs = len(gen.validation_filenames)
# no_of_train_imgs = len(gen.training_filenames)
# print("Validation on {} images ".format(str(no_of_val_imgs)))
# print("Training on {} images ".format(str(no_of_train_imgs)))
gen
gen.class_weight
df["1"]
labels = {}
for column in df.columns:
col_count = df[column].value_counts()
# print("column:",column)
# print(col_count)
try:
col_count = df[column].value_counts()[1]
labels[column] = col_count
except:
print(f"Missing positive chips for class {column}. Class weight ")
labels
def Simple_CNN(numclasses, input_shape): #TODO use a more complex CNN
model = Sequential([
layers.Input(input_shape),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(numclasses)
])
return model
model_simpleCNN = Simple_CNN(10, input_shape=(100, 100, 18))
callbacks_list = []
model_simpleCNN.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam()) #TODO add callbacks to save checkpoints and maybe lr reducer,etc
epochs = 10
history = model_simpleCNN.fit(gen.training_dataset, validation_data=gen.validation_dataset, epochs=epochs)
s3 = boto3.resource('s3')
obj = s3.Object('canopy-production-ml', "chips/cloudfree-merge-polygons/split/test/100/100_1000_1000.tif")
obj_bytes = io.BytesIO(obj.get()['Body'].read())
with rasterio.open(obj_bytes) as src:
img_test = np.transpose(src.read(), (1, 2, 0))
print(img_test.shape)
label_list = ['Habitation', 'ISL', 'Industrial_agriculture', 'Mining',
'Rainforest', 'River', 'Roads', 'Savannah', 'Shifting_cultivation',
'Water'
]
# TODO Need to weight labels since they are pretty unbalanced (Rainforest is largely represented)
# s3 = boto3.resource('s3')
# obj = s3.Object('canopy-production-ml', "chips/cloudfree-merge-polygons/split/train/58/58_1300_1000.tif")
# obj_bytes = io.BytesIO(obj.get()['Body'].read())
# with rasterio.open(obj_bytes) as src:
# img_test = np.transpose(src.read(), (1, 2, 0)) / 255
# print(img_test.shape)
predictions = model_simpleCNN.predict(np.array([img_test]))
highest_score_predictions = np.argmax(predictions) # TODO: read more about multi classes PER IMAGE classification, what is the threshold?
print("This chip was predicted to belong to class {}".format(label_list[highest_score_predictions]))
print(predictions)
predictions.argsort()
model_simpleCNN.evaluate(gen.validation_dataset) | 3/3 [==============================] - 35s 10s/step - loss: 0.1902
| MIT | model-development/_archive/ml_explore_David_2.ipynb | Project-Canopy/SSRC_New_Model_Development |
Resnet50 | def Resnet50(numclasses, input_shape):
model = Sequential()
model.add(keras.applications.ResNet50(include_top=False, pooling='avg', weights=None, input_shape=input_shape))
model.add(Flatten())
model.add(BatchNormalization())
model.add(Dense(2048, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1024, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(numclasses, activation='softmax'))
model.layers[0].trainable = True
return model
model_resnet50 = Resnet50(10, input_shape=(100, 100, 18))
callbacks_list = []
model_resnet50.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam()) #TODO add callbacks to save checkpoints and maybe lr reducer, earlystop,etc
epochs = 10
history = model_resnet50.fit(gen.training_dataset, validation_data=gen.validation_dataset, epochs=epochs)
predictions = model_resnet50.predict(np.array([img_test]))
highest_score_predictions = np.argmax(predictions) # TODO: read more about multi classes PER IMAGE classification, what is the threshold?
print("This chip was predicted to belong to class {}".format(label_list[highest_score_predictions]))
model_resnet50.evaluate(gen.validation_dataset)
# https://kgptalkie.com/multi-label-image-classification-on-movies-poster-using-cnn/
top3 = np.argsort(predictions[0])[:-4:-1]
for i in range(3):
print(label_list[top3[i]]) # We need to define a threshold
def plot_learningCurve(history, epoch):
# Plot training & validation accuracy values
# plt.plot(history.history['accuracy'])
# plt.plot(history.history['val_accuracy'])
# plt.title('Model accuracy')
# plt.ylabel('Accuracy')
# plt.xlabel('Epoch')
# plt.legend(['Train', 'Val'], loc='upper left')
# plt.show()
# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
plot_learningCurve(history, 20)
def medium_CNN(numclasses, input_shape):
model = Sequential()
model.add(Conv2D(16, (3,3), activation='relu', input_shape = input_shape))
model.add(BatchNormalization())
model.add(MaxPool2D(2,2))
model.add(Dropout(0.3))
model.add(Conv2D(32, (3,3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(2,2))
model.add(Dropout(0.3))
model.add(Conv2D(64, (3,3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(2,2))
model.add(Dropout(0.4))
model.add(Conv2D(128, (3,3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(2,2))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(numclasses, activation='sigmoid'))
return model
model_medium_CNN = medium_CNN(10, input_shape=(100, 100, 18))
callbacks_list = []
model_medium_CNN.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(),
metrics=[tf.metrics.BinaryAccuracy(threshold=0.0, name='accuracy')]) #TODO add callbacks to save checkpoints and maybe lr reducer, earlystop,etc
epochs = 10
history = model_medium_CNN.fit(gen.training_dataset, validation_data=gen.validation_dataset, epochs=epochs)
predictions = model_medium_CNN.predict(np.array([img_test]))
highest_score_predictions = np.argmax(predictions) # TODO: read more about multi classes PER IMAGE classification, what is the threshold?
print("This chip was predicted to belong to top 3 classes:")
top3 = np.argsort(predictions[0])[:-4:-1]
for i in range(3):
print(label_list[top3[i]])
model_medium_CNN.evaluate(gen.validation_dataset)
def plot_learningCurve(history, epoch):
# Plot training & validation accuracy values
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()
plot_learningCurve(history, 20) | _____no_output_____ | MIT | model-development/_archive/ml_explore_David_2.ipynb | Project-Canopy/SSRC_New_Model_Development |
TEST with bigearthnet-resnet50 | import tensorflow_hub as hub
IMAGE_SIZE = (100,100)
num_classes = 10
model_handle = "https://tfhub.dev/google/remote_sensing/bigearthnet-resnet50/1"
model_bigearthnet = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)),
# reshape?
hub.KerasLayer(model_handle, trainable=False, input_shape=IMAGE_SIZE + (3,)),
# (model.layers[-1].output)
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(num_classes,
kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model_bigearthnet.build((None,)+IMAGE_SIZE+(4,))
model_bigearthnet.summary()
# model_bigearthnet.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
# optimizer=keras.optimizers.Adam()) #TODO add callbacks to save checkpoints and maybe lr reducer, earlystop,etc
# epochs = 10
# history = model_bigearthnet.fit(gen.training_dataset, validation_data=gen.validation_dataset, epochs=epochs)
# predictions = model_resnet50.predict(np.array([img_test]))
# highest_score_predictions = np.argmax(predictions) # TODO: read more about multi classes PER IMAGE classification, what is the threshold?
# print("This chip was predicted to belong to class {}".format(label_list[highest_score_predictions])) | _____no_output_____ | MIT | model-development/_archive/ml_explore_David_2.ipynb | Project-Canopy/SSRC_New_Model_Development |
Preproduction Candidate: ResNet50 pretrained on ImageNet | gen = DataLoader(label_file_path_train="labels_test_v1.csv", #or labels.csv
label_file_path_val="val_labels.csv",
bucket_name='canopy-production-ml',
data_extension_type='.tif',
training_data_shape=(100, 100, 18),
shuffle_and_repeat=False,
enable_just_shuffle=False,
enable_just_repeat=False,
training_data_shuffle_buffer_size=10,
data_repeat_count=None,
training_data_batch_size=20,
normalization_value=255.0, #normalization TODO double check other channels than RGB
training_data_type=tf.float32,
label_data_type=tf.uint8,
enable_data_prefetch=False,
data_prefetch_size=tf.data.experimental.AUTOTUNE,
num_parallel_calls=int(2))
# TODO add data augmentation in DataLoader
no_of_val_imgs = len(gen.validation_filenames)
no_of_train_imgs = len(gen.training_filenames)
print("Validation on {} images ".format(str(no_of_val_imgs)))
print("Training on {} images ".format(str(no_of_train_imgs)))
def define_model(numclasses,input_shape):
# parameters for CNN
input_tensor = Input(shape=input_shape)
# introduce a additional layer to get from bands to 3 input channels
input_tensor = Conv2D(3, (1, 1))(input_tensor)
base_model_resnet50 = keras.applications.ResNet50(include_top=False,
weights='imagenet',
input_shape=(100, 100, 3))
base_model = keras.applications.ResNet50(include_top=False,
weights=None,
input_tensor=input_tensor)
for i, layer in enumerate(base_model_resnet50.layers):
# we must skip input layer, which has no weights
if i == 0:
continue
base_model.layers[i+1].set_weights(layer.get_weights())
# add a global spatial average pooling layer
top_model = base_model.output
top_model = GlobalAveragePooling2D()(top_model)
# let's add a fully-connected layer
top_model = Dense(2048, activation='relu')(top_model)
top_model = Dense(2048, activation='relu')(top_model)
# and a logistic layer
predictions = Dense(numclasses, activation='softmax')(top_model)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
model.summary()
return model
random_id = 5555 #TODO
checkpoint_file = 'checkpoint_{}.h5'.format(random_id)
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath= checkpoint_file,
format='h5',
verbose=1,
save_weights_only=True,
monitor='val_loss',
mode='min',
save_best_only=True)
reducelronplateau = tf.keras.callbacks.ReduceLROnPlateau(
monitor='val_loss', factor=0.1, patience=10, verbose=1,
mode='min', min_lr=1e-10)
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',mode='min', patience=20, verbose=1)
callbacks_list = [model_checkpoint_callback, reducelronplateau, early_stop]
model = define_model(10, (100,100,18))
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(),
metrics=[tf.metrics.BinaryAccuracy(name='accuracy')]) #TODO add callbacks to save checkpoints and maybe lr reducer, earlystop,etc
epochs = 20
history = model.fit(gen.training_dataset, validation_data=gen.validation_dataset,
epochs=epochs,
callbacks=callbacks_list)
| Epoch 1/20
1/Unknown - 0s 0s/step - loss: 0.7377 - accuracy: 0.8900 | MIT | model-development/_archive/ml_explore_David_2.ipynb | Project-Canopy/SSRC_New_Model_Development |
Evaluation | model.evaluate(gen.validation_dataset) | 3/3 [==============================] - 9s 3s/step - loss: 0.6661 - accuracy: 0.9473
| MIT | model-development/_archive/ml_explore_David_2.ipynb | Project-Canopy/SSRC_New_Model_Development |
Sandbox | # Applying albumentations
help(gen.training_dataset)
# necessary imports
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# import tensorflow_datasets as tfds
from functools import partial
from albumentations import (
Compose, RandomBrightness, JpegCompression, HueSaturationValue, RandomContrast, HorizontalFlip,
Rotate
)
AUTOTUNE = tf.data.experimental.AUTOTUNE
s3 = boto3.resource('s3')
# TODO test on entire test dataset
obj = s3.Object('canopy-production-ml', "chips/cloudfree-merge-polygons/split/test/100/100_1000_1000.tif")
obj_bytes = io.BytesIO(obj.get()['Body'].read())
with rasterio.open(obj_bytes) as src:
img_test = np.transpose(src.read(), (1, 2, 0))
print(img_test.shape)
train_img = tf.image.convert_image_dtype(img_test,tf.float32)
train_img = tf.image.random_flip_left_right(train_img)
help(tf.image) | Help on package tensorflow._api.v2.image in tensorflow._api.v2:
NAME
tensorflow._api.v2.image - Image ops.
DESCRIPTION
The `tf.image` module contains various functions for image
processing and decoding-encoding Ops.
Many of the encoding/decoding functions are also available in the
core `tf.io` module.
## Image processing
### Resizing
The resizing Ops accept input images as tensors of several types. They always
output resized images as float32 tensors.
The convenience function `tf.image.resize` supports both 4-D
and 3-D tensors as input and output. 4-D tensors are for batches of images,
3-D tensors for individual images.
Resized images will be distorted if their original aspect ratio is not the
same as size. To avoid distortions see tf.image.resize_with_pad.
* `tf.image.resize`
* `tf.image.resize_with_pad`
* `tf.image.resize_with_crop_or_pad`
The Class `tf.image.ResizeMethod` provides various resize methods like
`bilinear`, `nearest_neighbor`.
### Converting Between Colorspaces
Image ops work either on individual images or on batches of images, depending on
the shape of their input Tensor.
If 3-D, the shape is `[height, width, channels]`, and the Tensor represents one
image. If 4-D, the shape is `[batch_size, height, width, channels]`, and the
Tensor represents `batch_size` images.
Currently, `channels` can usefully be 1, 2, 3, or 4. Single-channel images are
grayscale, images with 3 channels are encoded as either RGB or HSV. Images
with 2 or 4 channels include an alpha channel, which has to be stripped from the
image before passing the image to most image processing functions (and can be
re-attached later).
Internally, images are either stored in as one `float32` per channel per pixel
(implicitly, values are assumed to lie in `[0,1)`) or one `uint8` per channel
per pixel (values are assumed to lie in `[0,255]`).
TensorFlow can convert between images in RGB or HSV or YIQ.
* `tf.image.rgb_to_grayscale`, `tf.image.grayscale_to_rgb`
* `tf.image.rgb_to_hsv`, `tf.image.hsv_to_rgb`
* `tf.image.rgb_to_yiq`, `tf.image.yiq_to_rgb`
* `tf.image.rgb_to_yuv`, `tf.image.yuv_to_rgb`
* `tf.image.image_gradients`
* `tf.image.convert_image_dtype`
### Image Adjustments
TensorFlow provides functions to adjust images in various ways: brightness,
contrast, hue, and saturation. Each adjustment can be done with predefined
parameters or with random parameters picked from predefined intervals. Random
adjustments are often useful to expand a training set and reduce overfitting.
If several adjustments are chained it is advisable to minimize the number of
redundant conversions by first converting the images to the most natural data
type and representation.
* `tf.image.adjust_brightness`
* `tf.image.adjust_contrast`
* `tf.image.adjust_gamma`
* `tf.image.adjust_hue`
* `tf.image.adjust_jpeg_quality`
* `tf.image.adjust_saturation`
* `tf.image.random_brightness`
* `tf.image.random_contrast`
* `tf.image.random_hue`
* `tf.image.random_saturation`
* `tf.image.per_image_standardization`
### Working with Bounding Boxes
* `tf.image.draw_bounding_boxes`
* `tf.image.combined_non_max_suppression`
* `tf.image.generate_bounding_box_proposals`
* `tf.image.non_max_suppression`
* `tf.image.non_max_suppression_overlaps`
* `tf.image.non_max_suppression_padded`
* `tf.image.non_max_suppression_with_scores`
* `tf.image.pad_to_bounding_box`
* `tf.image.sample_distorted_bounding_box`
### Cropping
* `tf.image.central_crop`
* `tf.image.crop_and_resize`
* `tf.image.crop_to_bounding_box`
* `tf.io.decode_and_crop_jpeg`
* `tf.image.extract_glimpse`
* `tf.image.random_crop`
* `tf.image.resize_with_crop_or_pad`
### Flipping, Rotating and Transposing
* `tf.image.flip_left_right`
* `tf.image.flip_up_down`
* `tf.image.random_flip_left_right`
* `tf.image.random_flip_up_down`
* `tf.image.rot90`
* `tf.image.transpose`
## Image decoding and encoding
TensorFlow provides Ops to decode and encode JPEG and PNG formats. Encoded
images are represented by scalar string Tensors, decoded images by 3-D uint8
tensors of shape `[height, width, channels]`. (PNG also supports uint16.)
Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`
The encode and decode Ops apply to one image at a time. Their input and output
are all of variable size. If you need fixed size images, pass the output of
the decode Ops to one of the cropping and resizing Ops.
* `tf.io.decode_bmp`
* `tf.io.decode_gif`
* `tf.io.decode_image`
* `tf.io.decode_jpeg`
* `tf.io.decode_and_crop_jpeg`
* `tf.io.decode_png`
* `tf.io.encode_jpeg`
* `tf.io.encode_png`
PACKAGE CONTENTS
FILE
/Users/purgatorid/opt/anaconda3/envs/ml-conda/lib/python3.7/site-packages/tensorflow/_api/v2/image/__init__.py
| MIT | model-development/_archive/ml_explore_David_2.ipynb | Project-Canopy/SSRC_New_Model_Development |
David's area | import boto3
import rasterio as rio
import numpy as np
import io
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model
import keras
from data_loader import DataLoader
batch_size = 20
gen = DataLoader(label_file_path_train="labels_1_4_train_v2.csv",
label_file_path_val="val_labels.csv",
bucket_name='canopy-production-ml',
data_extension_type='.tif',
training_data_shape=(100, 100, 18),
augment=True,
random_flip_up_down=False, #Randomly flips an image vertically (upside down). With a 1 in 2 chance, outputs the contents of `image` flipped along the first dimension, which is `height`.
random_flip_left_right=False,
flip_left_right=True,
flip_up_down=True,
rot90=False,
transpose=False,
enable_shuffle=True,
# training_data_shuffle_buffer_size=10,
training_data_batch_size=batch_size,
training_data_type=tf.float32,
label_data_type=tf.uint8,
enable_data_prefetch=True,
data_prefetch_size=tf.data.experimental.AUTOTUNE,
num_parallel_calls=int(2))
gen.class_weight
def define_model(numclasses,input_shape):
# parameters for CNN
input_tensor = Input(shape=input_shape)
# introduce a additional layer to get from bands to 3 input channels
input_tensor = Conv2D(3, (1, 1))(input_tensor)
base_model_resnet50 = keras.applications.ResNet50(include_top=False,
weights='imagenet',
input_shape=(100, 100, 3))
base_model = keras.applications.ResNet50(include_top=False,
weights=None,
input_tensor=input_tensor)
for i, layer in enumerate(base_model_resnet50.layers):
# we must skip input layer, which has no weights
if i == 0:
continue
base_model.layers[i+1].set_weights(layer.get_weights())
# add a global spatial average pooling layer
top_model = base_model.output
top_model = GlobalAveragePooling2D()(top_model)
# let's add a fully-connected layer
top_model = Dense(2048, activation='relu')(top_model)
top_model = Dense(2048, activation='relu')(top_model)
# and a logistic layer
predictions = Dense(numclasses, activation='softmax')(top_model)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
model.summary()
return model
import wandb
from wandb.keras import WandbCallback
wandb.init(project="canopy-first-model-testing", name="baseline")
help(wandb.init)
help(WandbCallback)
random_id = 5555 #TODO
checkpoint_file = 'checkpoint_{}.h5'.format(random_id)
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath= checkpoint_file,
format='h5',
verbose=1,
save_weights_only=True,
monitor='val_loss',
mode='min',
save_best_only=True)
reducelronplateau = tf.keras.callbacks.ReduceLROnPlateau(
monitor='val_loss', factor=0.1, patience=10, verbose=1,
mode='min', min_lr=1e-10)
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',mode='min', patience=20, verbose=1)
#labels = ["Habitation", "ISL", "Industrial_agriculture", "Mining", "Rainforest",
# "River", "Roads", "Savannah", "Shifting_cultivation", "Water"] # Cut out mining b/c not in training data
callbacks_list = [model_checkpoint_callback, reducelronplateau, early_stop]
#WandbCallback(monitor='accuracy', data_type="image", labels=labels)]
model = define_model(10, (100,100,18))
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(),
metrics=[tf.metrics.BinaryAccuracy(name='accuracy')]) #TODO add callbacks to save checkpoints and maybe lr reducer, earlystop,etc
weights = gen.class_weight
corrected_weights = {}
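# Added note: class index 3 appears to correspond to Mining, which has no positive chips in this
# training split (see the 'Missing positive chips' check earlier), so it gets a neutral weight of 1.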
for i in range(10):
if i == 3:
corrected_weights[i] = 1
else:
corrected_weights[i] = weights[i]
corrected_weights
epochs = 20
history = model.fit(gen.training_dataset, validation_data=gen.validation_dataset,
epochs=epochs,
callbacks=callbacks_list,
class_weight=corrected_weights)
np.array(list(corrected_weights.values())).shape.rank
model = define_model(10, (100,100,18))
model.layers
model.layers[4]
bn = model.layers[4]
help(bn)
bn.name_scope()
bn.name
model.layers[9].name
model.layers[2].name
bns = []
for layer in model.layers:
if 'bn' in layer.name:
bns.append(layer)
bns
def define_model(numclasses, input_shape, freeze_bns=True):
# parameters for CNN
input_tensor = Input(shape=input_shape)
# introduce a additional layer to get from bands to 3 input channels
input_tensor = Conv2D(3, (1, 1))(input_tensor)
base_model_resnet50 = keras.applications.ResNet50(include_top=False,
weights='imagenet',
input_shape=(100, 100, 3))
base_model = keras.applications.ResNet50(include_top=False,
weights=None,
input_tensor=input_tensor)
for i, layer in enumerate(base_model_resnet50.layers):
# we must skip input layer, which has no weights
if i == 0:
continue
base_model.layers[i+1].set_weights(layer.get_weights())
if freeze_bns:
if 'bn' in layer.name:
base_model.layers[i+1].trainable = False
# add a global spatial average pooling layer
top_model = base_model.output
top_model = GlobalAveragePooling2D()(top_model)
# let's add a fully-connected layer
top_model = Dense(2048, activation='relu')(top_model)
top_model = Dense(2048, activation='relu')(top_model)
# and a logistic layer
predictions = Dense(numclasses, activation='softmax')(top_model)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
model.summary()
return model
model = define_model(10, (100,100,18))
model.layers[4].trainable
model.layers[3].trainable | _____no_output_____ | MIT | model-development/_archive/ml_explore_David_2.ipynb | Project-Canopy/SSRC_New_Model_Development |
Statistical Natural Language Processing in Python.orHow To Do Things With Words. And Counters.orEverything I Needed to Know About NLP I learned From Sesame Street.Except Kneser-Ney Smoothing.The Count Didn't Cover That. *One, two, three, ah, ah, ah!* — The Count (1) Data: Text and Words========Before we can do things with words, we need some words. First we need some *text*, possibly from a *file*. Then we can break the text into words. I happen to have a big text called [big.txt](file:///Users/pnorvig/Documents/ipynb/big.txt). We can read it, and see how big it is (in characters): | TEXT = file('big.txt').read()
len(TEXT) | _____no_output_____ | MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
So, six million characters.Now let's break the text up into words (or more formal-sounding, *tokens*). For now we'll ignore all the punctuation and numbers, and anything that is not a letter. | def tokens(text):
"List all the word tokens (consecutive letters) in a text. Normalize to lowercase."
return re.findall('[a-z]+', text.lower())
tokens('This is: A test, 1, 2, 3, this is.')
WORDS = tokens(TEXT)
len(WORDS) | _____no_output_____ | MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
So, a million words. Here are the first 10: | print(WORDS[:10]) | ['the', 'project', 'gutenberg', 'ebook', 'of', 'the', 'adventures', 'of', 'sherlock', 'holmes']
| MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
(2) Models: Bag of Words====The list `WORDS` is a list of the words in the `TEXT`, but it can also serve as a *generative model* of text. We know that language is very complicated, but we can create a simplified model of language that captures part of the complexity. In the *bag of words* model, we ignore the order of words, but maintain their frequency. Think of it this way: take all the words from the text, and throw them into a bag. Shake the bag, and then generating a sentence consists of pulling words out of the bag one at a time. Chances are it won't be grammatical or sensible, but it will have words in roughly the right proportions. Here's a function to sample an *n* word sentence from a bag of words: | def sample(bag, n=10):
"Sample a random n-word sentence from the model described by the bag of words."
return ' '.join(random.choice(bag) for _ in range(n))
sample(WORDS) | _____no_output_____ | MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
Another representation for a bag of words is a `Counter`, which is a dictionary of `{'word': count}` pairs. For example, | Counter(tokens('Is this a test? It is a test!')) | _____no_output_____ | MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
A `Counter` is like a `dict`, but with a few extra methods. Let's make a `Counter` for the big list of `WORDS` and get a feel for what's there: | COUNTS = Counter(WORDS)
print COUNTS.most_common(10)
for w in tokens('the rare and neverbeforeseen words'):
print COUNTS[w], w | 80029 the
83 rare
38312 and
0 neverbeforeseen
460 words
| MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
In 1935, linguist George Zipf noted that in any big text, the *n*th most frequent word appears with a frequency of about 1/*n* of the most frequent word. He gets credit for *Zipf's Law*, even though Felix Auerbach made the same observation in 1913. If we plot the frequency of words, most common first, on a log-log plot, they should come out as a straight line if Zipf's Law holds. Here we see that it is a fairly close fit: | M = COUNTS['the']
yscale('log'); xscale('log'); title('Frequency of n-th most frequent word and 1/n line.')
plot([c for (w, c) in COUNTS.most_common()])
plot([M/i for i in range(1, len(COUNTS)+1)]); | _____no_output_____ | MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
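A quick numeric spot check of the same claim (an added sketch, not in the original notebook): under Zipf's Law the count of the *n*-th most frequent word should be roughly M/*n*.

```python
# Sketch: compare the actual count of the n-th most common word with the Zipf estimate M/n.
for n in [1, 10, 100, 1000, 10000]:
    word, count = COUNTS.most_common(n)[-1]
    print((n, word, count, M // n))
```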
(3) Task: Spelling Correction========Given a word *w*, find the most likely correction *c* = `correct(`*w*`)`.**Approach:** Try all candidate words *c* that are known words that are near *w*. Choose the most likely one.How to balance *near* and *likely*?For now, in a trivial way: always prefer nearer, but when there is a tie on nearness, use the word with the highest `WORDS` count. Measure nearness by *edit distance*: the minimum number of deletions, transpositions, insertions, or replacements of characters. By trial and error, we determine that going out to edit distance 2 will give us reasonable results. Then we can define `correct(`*w*`)`: | def correct(word):
"Find the best spelling correction for this word."
# Prefer edit distance 0, then 1, then 2; otherwise default to word itself.
candidates = (known(edits0(word)) or
known(edits1(word)) or
known(edits2(word)) or
[word])
return max(candidates, key=COUNTS.get) | _____no_output_____ | MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
The functions `known` and `edits0` are easy; and `edits2` is easy if we assume we have `edits1`: | def known(words):
"Return the subset of words that are actually in the dictionary."
return {w for w in words if w in COUNTS}
def edits0(word):
"Return all strings that are zero edits away from word (i.e., just word itself)."
return {word}
def edits2(word):
"Return all strings that are two edits away from this word."
return {e2 for e1 in edits1(word) for e2 in edits1(e1)} | _____no_output_____ | MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
Now for `edits1(word)`: the set of candidate words that are one edit away. For example, given `"wird"`, this would include `"weird"` (inserting an `e`) and `"word"` (replacing an `i` with an `o`), and also `"iwrd"` (transposing `w` and `i`; then `known` can be used to filter this out of the set of final candidates). How could we get them? One way is to *split* the original word in all possible places, each split forming a *pair* of words, `(a, b)`, before and after the place, and at each place, either delete, transpose, replace, or insert a letter: pairs: Ø+wird, w+ird, wi+rd, wir+d, wird+Ø (the (a, b) pairs); deletions: Ø+ird, w+rd, wi+d, wir+Ø (delete first char of b); transpositions: Ø+iwrd, w+rid, wi+dr (swap first two chars of b); replacements: Ø+?ird, w+?rd, wi+?d, wir+? (replace char at start of b); insertions: Ø+?+wird, w+?+ird, wi+?+rd, wir+?+d, wird+?+Ø (insert char between a and b) | def edits1(word):
"Return all strings that are one edit away from this word."
pairs = splits(word)
deletes = [a+b[1:] for (a, b) in pairs if b]
transposes = [a+b[1]+b[0]+b[2:] for (a, b) in pairs if len(b) > 1]
replaces = [a+c+b[1:] for (a, b) in pairs for c in alphabet if b]
inserts = [a+c+b for (a, b) in pairs for c in alphabet]
return set(deletes + transposes + replaces + inserts)
def splits(word):
"Return a list of all possible (first, rest) pairs that comprise word."
return [(word[:i], word[i:])
for i in range(len(word)+1)]
alphabet = 'abcdefghijklmnopqrstuvwxyz'
splits('wird')
print edits0('wird')
print edits1('wird')
print len(edits2('wird'))
map(correct, tokens('Speling errurs in somethink. Whutever; unusuel misteakes everyware?')) | _____no_output_____ | MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
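To see the tie-breaking in action, here is a small sketch (added for illustration) that ranks the known edit-distance-1 candidates for `'wird'` by their counts; `correct` simply takes the top one:

```python
# Rank the known candidates one edit away from 'wird' by how often they
# appear in COUNTS; correct('wird') returns the most frequent of them.
for w in sorted(known(edits1('wird')), key=COUNTS.get, reverse=True):
    print('{:8} {}'.format(COUNTS[w], w))
print(correct('wird'))
```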
Can we make the output prettier than that? | def correct_text(text):
"Correct all the words within a text, returning the corrected text."
return re.sub('[a-zA-Z]+', correct_match, text)
def correct_match(match):
"Spell-correct word in match, and preserve proper upper/lower/title case."
word = match.group()
return case_of(word)(correct(word.lower()))
def case_of(text):
"Return the case-function appropriate for text: upper, lower, title, or just str."
return (str.upper if text.isupper() else
str.lower if text.islower() else
str.title if text.istitle() else
str)
map(case_of, ['UPPER', 'lower', 'Title', 'CamelCase'])
correct_text('Speling Errurs IN somethink. Whutever; unusuel misteakes?')
correct_text('Audiance sayzs: tumblr ...') | _____no_output_____ | MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
So far so good. You can probably think of a dozen ways to make this better. Here's one: in the text "three, too, one, blastoff!" we might want to correct "too" with "two", even though "too" is in the dictionary. We can do better if we look at a *sequence* of words, not just an individual word one at a time. But how can we choose the best corrections of a sequence? The ad-hoc approach worked pretty well for single words, but now we could use some real theory ...

(4) Models: Word and Sequence Probabilities
===

If we have a bag of words, what's the probability of picking a particular word out of the bag? We'll denote that probability as $P(w)$. To create the function `P` that computes this probability, we define a function, `pdist`, that takes as input a `Counter` (that is, a bag of words) and returns a function that acts as a probability distribution over all possible words. In a probability distribution the probability of each word is between 0 and 1, and the sum of the probabilities is 1. | def pdist(counter):
"Make a probability distribution, given evidence from a Counter."
N = sum(counter.values())
return lambda x: counter[x]/N
Pword = pdist(COUNTS)
# Print probabilities of some words
for w in tokens('"The" is most common word in English'):
print Pword(w), w | 0.0724106075672 the
0.00884356018896 is
0.000821562579453 most
0.000259678921039 common
0.000269631771671 word
0.0199482270806 in
0.000190913771217 english
| MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
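Two quick sanity checks on `pdist` (a small sketch, just for illustration): the probabilities over the whole vocabulary should sum to 1, and a word we never saw should get probability 0.

```python
print(sum(Pword(w) for w in COUNTS))    # should be very close to 1.0
print(Pword('neverbeforeseen'))         # unseen words get probability 0
```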
Now, what is the probability of a *sequence* of words? Use the definition of a joint probability:

$P(w_1 \ldots w_n) = P(w_1) \times P(w_2 \mid w_1) \times P(w_3 \mid w_1 w_2) \times \ldots \times P(w_n \mid w_1 \ldots w_{n-1})$

In the bag of words model, each word is drawn from the bag *independently* of the others. So $P(w_2 \mid w_1) = P(w_2)$, and we have:

$P(w_1 \ldots w_n) = P(w_1) \times P(w_2) \times P(w_3) \times \ldots \times P(w_n)$

Now clearly this model is wrong; the probability of a sequence depends on the order of the words. But, as the statistician George Box said, *All models are wrong, but some are useful.* The bag of words model, wrong as it is, has many useful applications. How can we compute $P(w_1 \ldots w_n)$? We'll use a different function name, `Pwords`, rather than `P`, and we compute the product of the individual probabilities: | def Pwords(words):
"Probability of words, assuming each word is independent of others."
return product(Pword(w) for w in words)
def product(nums):
"Multiply the numbers together. (Like `sum`, but with multiplication.)"
result = 1
for x in nums:
result *= x
return result
tests = ['this is a test',
'this is a unusual test',
'this is a neverbeforeseen test']
for test in tests:
print Pwords(tokens(test)), test | 2.98419543271e-11 this is a test
8.64036404331e-16 this is a unusual test
0.0 this is a neverbeforeseen test
| MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
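Because each word is drawn from the bag independently, `Pwords` assigns exactly the same probability to any reordering of a sentence; a quick sketch makes the point (and previews why we will want a better model later):

```python
# The bag-of-words model ignores word order: both orderings multiply
# together the very same factors, so the probabilities are identical.
print(Pwords(tokens('this is a test')))
print(Pwords(tokens('is test a this')))
```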
Yikes—it seems wrong to give a probability of 0 to the last one; it should just be very small. We'll come back to that later. The other probabilities seem reasonable.

(5) Task: Word Segmentation
====

**Task**: *given a sequence of characters with no spaces separating words, recover the sequence of words.*

Why? Languages with no word delimiters: [不带空格的词](http://translate.google.com/auto/en/%E4%B8%8D%E5%B8%A6%E7%A9%BA%E6%A0%BC%E7%9A%84%E8%AF%8D) ("words without spaces"). In English, sub-genres with no word delimiters ([spelling errors](https://www.google.com/search?q=wordstogether), [URLs](http://speedofart.com)).

**Approach 1:** Enumerate all candidate segmentations and choose the one with the highest `Pwords`. Problem: how many segmentations are there for an *n*-character text?

**Approach 2:** Make one segmentation, into a first word and remaining characters. If we assume words are independent then we can maximize the probability of the first word adjoined to the best segmentation of the remaining characters:

    assert segment('choosespain') == ['choose', 'spain']

    segment('choosespain') ==
        max(Pwords(['c'] + segment('hoosespain')),
            Pwords(['ch'] + segment('oosespain')),
            Pwords(['cho'] + segment('osespain')),
            Pwords(['choo'] + segment('sespain')),
            ...
            Pwords(['choosespain'] + segment('')))

To make this somewhat efficient, we need to avoid re-computing the segmentations of the remaining characters. This can be done explicitly by *dynamic programming* or implicitly with *memoization*. Also, we shouldn't consider all possible lengths for the first word; we can impose a maximum length. What should it be? A little more than the longest word seen so far. | def memo(f):
"Memoize function f, whose args must all be hashable."
cache = {}
def fmemo(*args):
if args not in cache:
cache[args] = f(*args)
return cache[args]
fmemo.cache = cache
return fmemo
max(len(w) for w in COUNTS)
def splits(text, start=0, L=20):
"Return a list of all (first, rest) pairs; start <= len(first) <= L."
return [(text[:i], text[i:])
for i in range(start, min(len(text), L)+1)]
print splits('word')
print splits('reallylongtext', 1, 4)
@memo
def segment(text):
"Return a list of words that is the most probable segmentation of text."
if not text:
return []
else:
candidates = ([first] + segment(rest)
for (first, rest) in splits(text, 1))
return max(candidates, key=Pwords)
segment('choosespain')
segment('speedofart')
decl = ('wheninthecourseofhumaneventsitbecomesnecessaryforonepeople' +
'todissolvethepoliticalbandswhichhaveconnectedthemwithanother' +
'andtoassumeamongthepowersoftheearththeseparateandequalstation' +
'towhichthelawsofnatureandofnaturesgodentitlethem')
print(' '.join(segment(decl)))
Pwords(segment(decl))
Pwords(segment(decl+decl))
Pwords(segment(decl+decl+decl)) | _____no_output_____ | MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
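As for the question raised under Approach 1: each of the *n*-1 gaps between characters either is or isn't a word boundary, so an *n*-character text has 2^(*n*-1) candidate segmentations, which is why we never enumerate them all. A quick sketch of the arithmetic:

```python
# Each of the n-1 gaps between characters is independently a break or not,
# so there are 2**(n-1) candidate segmentations of an n-character text.
for text in ['wird', 'choosespain', decl]:
    n = len(text)
    print('{} characters: 2**{} = {} segmentations'.format(n, n - 1, 2 ** (n - 1)))
```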
That's a problem. We'll come back to it later. | segment('smallandinsignificant')
segment('largeandinsignificant')
print(Pwords(['large', 'and', 'insignificant']))
print(Pwords(['large', 'and', 'in', 'significant'])) | 4.1121373609e-10
1.06638804821e-11
| MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
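One problem lurking above is numeric underflow: multiplying a few hundred word probabilities, each well below 1, eventually gives a product too small for floating point to represent, so `Pwords` returns 0.0 for long texts. The usual fix is to add log-probabilities instead of multiplying probabilities; here is a minimal sketch (the `logPwords` helper is just for illustration):

```python
import math

def logPwords(words):
    "Log-probability of words under the bag-of-words model; -inf if any word is unseen."
    return sum(math.log(Pword(w)) if Pword(w) > 0 else float('-inf')
               for w in words)

words10 = segment(decl) * 10      # a long, perfectly ordinary list of known words
print(Pwords(words10))            # the raw product underflows to 0.0 at this length
print(logPwords(words10))         # but the log-probability stays finite
```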
Summary:

- Looks pretty good!
- The bag-of-words assumption is a limitation.
- Recomputing Pwords on each recursive call is somewhat inefficient.
- Numeric underflow for texts longer than 100 or so words; we'll need to use logarithms, or other tricks.

(6) Data: Mo' Data, Mo' Better
===

Let's move up from millions to *billions and billions* of words. Once we have that amount of data, we can start to look at two-word sequences, without them being too sparse. I happen to have data files available in the format of `"word \t count"`, and bigram data in the form of `"word1 word2 \t count"`. Let's arrange to read them in: | def load_counts(filename, sep='\t'):
"""Return a Counter initialized from key-value pairs,
one on each line of filename."""
C = Counter()
for line in open(filename):
key, count = line.split(sep)
C[key] = int(count)
return C
COUNTS1 = load_counts('count_1w.txt')
COUNTS2 = load_counts('count_2w.txt')
P1w = pdist(COUNTS1)
P2w = pdist(COUNTS2)
print len(COUNTS1), sum(COUNTS1.values())/1e9
print len(COUNTS2), sum(COUNTS2.values())/1e9
COUNTS2.most_common(30) | _____no_output_____ | MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
(7) Theory and Practice: Segmentation With Bigram Data
===

A less-wrong approximation:

$P(w_1 \ldots w_n) = P(w_1 \mid start) \times P(w_2 \mid w_1) \times P(w_3 \mid w_2) \times \ldots \times P(w_n \mid w_{n-1})$

This is called the *bigram* model, and is equivalent to taking a text, cutting it up into slips of paper with two words on them, and having multiple bags, and putting each slip into a bag labelled with the first word on the slip. Then, to generate language, we choose the first word from the original single bag of words, and choose all subsequent words from the bag with the label of the previously-chosen word. To determine the probability of a word sequence, we multiply together the conditional probabilities of each word given the previous word. We'll do this with a function, `cPword`, for "conditional probability of a word":

$P(w_n \mid w_{n-1}) = P(w_{n-1}w_n) / P(w_{n-1})$ | def Pwords2(words, prev='<S>'):
"The probability of a sequence of words, using bigram data, given prev word."
return product(cPword(w, (prev if (i == 0) else words[i-1]) )
for (i, w) in enumerate(words))
P = P1w # Use the big dictionary for the probability of a word
def cPword(word, prev):
"Conditional probability of word, given previous word."
bigram = prev + ' ' + word
if P2w(bigram) > 0 and P(prev) > 0:
return P2w(bigram) / P(prev)
else: # Average the back-off value and zero.
return P(word) / 2
print Pwords(tokens('this is a test'))
print Pwords2(tokens('this is a test'))
print Pwords2(tokens('is test a this')) | 2.98419543271e-11
6.41367629438e-08
1.18028600367e-11
| MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
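The slips-of-paper picture above is also a recipe for *generating* text. Here is a rough sketch of that generator (my own illustration, not a piece of the model itself; it relies on the keys of `COUNTS2` being `"word1 word2"` strings, and it builds an in-memory index, so it is not cheap):

```python
import random
from collections import defaultdict

# One "bag" per first word: each bag holds (second word, count) slips.
BAGS = defaultdict(list)
for bigram, count in COUNTS2.items():
    parts = bigram.split(' ')
    if len(parts) == 2:
        BAGS[parts[0]].append((parts[1], count))

def sample_next(prev):
    "Draw a following word with probability proportional to its bigram count."
    slips = BAGS[prev]
    if not slips:
        return random.choice(list(COUNTS1))   # fall back to a random known word
    r = random.uniform(0, sum(count for (w, count) in slips))
    total = 0
    for (w, count) in slips:
        total += count
        if total >= r:
            return w
    return slips[-1][0]

def babble(start='the', n=12):
    "Generate n more words from the bigram model, starting from start."
    words = [start]
    for _ in range(n):
        words.append(sample_next(words[-1]))
    return ' '.join(words)

print(babble('the'))
```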
To make `segment2`, we copy `segment`, and make sure to pass around the previous token, and to evaluate probabilities with `Pwords2` instead of `Pwords`. | @memo
def segment2(text, prev='<S>'):
"Return best segmentation of text; use bigram data."
if not text:
return []
else:
candidates = ([first] + segment2(rest, first)
for (first, rest) in splits(text, 1))
return max(candidates, key=lambda words: Pwords2(words, prev))
print segment2('choosespain')
print segment2('speedofart')
print segment2('smallandinsignificant')
print segment2('largeandinsignificant')
adams = ('faroutintheunchartedbackwatersoftheunfashionableendofthewesternspiral' +
'armofthegalaxyliesasmallunregardedyellowsun')
print segment(adams)
print segment2(adams)
P1w('unregarded')
tolkein = 'adrybaresandyholewithnothinginittositdownonortoeat'
print segment(tolkein)
print segment2(tolkein) | ['a', 'dry', 'bare', 'sandy', 'hole', 'with', 'nothing', 'in', 'it', 'to', 'sit', 'down', 'on', 'or', 'to', 'eat']
['a', 'dry', 'bare', 'sandy', 'hole', 'with', 'nothing', 'in', 'it', 'to', 'sit', 'down', 'on', 'or', 'to', 'eat']
| MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
Conclusion? Bigram model is a little better, but not much. Hundreds of billions of words still not enough. (Why not trillions?) Could be made more efficient.

(8) Theory: Evaluation
===

So far, we've got an intuitive feel for how this all works. But we don't have any solid metrics that quantify the results. Without metrics, we can't say if we are doing well, nor if a change is an improvement. In general, when developing a program that relies on data to help make predictions, it is good practice to divide your data into three sets:

- Training set: the data used to create our spelling model; this was the big.txt file.
- Development set: a set of input/output pairs that we can use to rank the performance of our program as we are developing it.
- Test set: another set of input/output pairs that we use to rank our program after we are done developing it. The development set can't be used for this purpose—once the programmer has looked at the development set it is tainted, because the programmer might modify the program just to pass the development test. That's why we need a separate test set that is only looked at after development is done.

For this program, the training data is the word frequency counts, the development set is the examples like `"choosespain"` that we have been playing with, and now we need a test set. | def test_segmenter(segmenter, tests):
"Try segmenter on tests; report failures; return fraction correct."
return sum([test_one_segment(segmenter, test)
for test in tests]), len(tests)
def test_one_segment(segmenter, test):
words = tokens(test)
result = segmenter(cat(words))
correct = (result == words)
if not correct:
print 'expected', words
print ' got', result
return correct
cat = ''.join
proverbs = ("""A little knowledge is a dangerous thing
A man who is his own lawyer has a fool for his client
All work and no play makes Jack a dull boy
Better to remain silent and be thought a fool that to speak and remove all doubt;
Do unto others as you would have them do to you
Early to bed and early to rise, makes a man healthy, wealthy and wise
Fools rush in where angels fear to tread
Genius is one percent inspiration, ninety-nine percent perspiration
If you lie down with dogs, you will get up with fleas
Lightning never strikes twice in the same place
Power corrupts; absolute power corrupts absolutely
Here today, gone tomorrow
See no evil, hear no evil, speak no evil
Sticks and stones may break my bones, but words will never hurt me
Take care of the pence and the pounds will take care of themselves
Take care of the sense and the sounds will take care of themselves
The bigger they are, the harder they fall
The grass is always greener on the other side of the fence
The more things change, the more they stay the same
Those who do not learn from history are doomed to repeat it"""
.splitlines())
test_segmenter(segment, proverbs)
test_segmenter(segment2, proverbs) | _____no_output_____ | MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
This confirms that both segmenters are very good, and that `segment2` is slightly better. There is much more that can be done in terms of the variety of tests, and in measuring statistical significance.

(9) Theory and Practice: Smoothing
======

Let's go back to a test we did before, and add some more test cases: | tests = ['this is a test',
'this is a unusual test',
'this is a nongovernmental test',
'this is a neverbeforeseen test',
'this is a zqbhjhsyefvvjqc test']
for test in tests:
print Pwords(tokens(test)), test | 2.98419543271e-11 this is a test
8.64036404331e-16 this is a unusual test
0.0 this is a nongovernmental test
0.0 this is a neverbeforeseen test
0.0 this is a zqbhjhsyefvvjqc test
| MIT | ipynb/How to Do Things with Words.ipynb | mikiec84/pytudes |
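The last three sentences all get probability 0 only because each contains one word that never appeared in the training data; worse, the model can't tell a plausible rare word from random keyboard-mashing. One simple remedy is additive (Laplace) smoothing: pretend every word, including the unseen ones, was observed a small number of extra times. Here is a minimal sketch of that idea (just one option among several; the helper names are my own):

```python
# Additive (Laplace) smoothing: add c imaginary observations of every word,
# plus c for a single catch-all "unknown" word, so nothing has probability 0.
def pdist_additive_smoothed(counter, c=1):
    "Probability of a word, with add-c smoothing for unseen words."
    N = sum(counter.values())               # real observations
    Nplus = N + c * (len(counter) + 1)      # real plus imaginary observations
    return lambda word: (counter[word] + c) / float(Nplus)

Pword_smoothed = pdist_additive_smoothed(COUNTS)

def Pwords_smoothed(words):
    "Probability of a sequence of words, under the smoothed bag-of-words model."
    return product(Pword_smoothed(w) for w in words)

for test in tests:
    print Pwords_smoothed(tokens(test)), test
```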