The issue here is the finality of a probability of zero. Out of the three 15-letter words, it turns out that "nongovernmental" is in the dictionary, but if it hadn't been, if somehow our corpus of words had missed it, then the probability of that whole phrase would have been zero. It seems that is too strict; there must be some "real" words that are not in our dictionary, so we shouldn't give them probability zero. There is also a question of the likelihood of being a "real" word. It does seem that "neverbeforeseen" is more English-like than "zqbhjhsyefvvjqc", and so perhaps should have a higher probability.

We can address this by assigning a non-zero probability to words that are not in the dictionary. This is even more important when it comes to multi-word phrases (such as bigrams), because it is more likely that a legitimate one will appear that has not been observed before.

We can think of our model as being overly spiky; it has a spike of probability mass wherever a word or phrase occurs in the corpus. What we would like to do is *smooth* over those spikes so that we get a model that does not depend on the details of our corpus. The process of "fixing" the model is called *smoothing*.

For example, Laplace was asked what the probability is of the sun rising tomorrow. From the data that it has risen $n/n$ times for the last *n* days, the maximum likelihood estimator is 1. But Laplace wanted to balance the data with the possibility that tomorrow, either it will rise or it won't, so he came up with $(n + 1) / (n + 2)$.

> *What we know is little, and what we are ignorant of is immense.* — Pierre Simon Laplace, 1749-1827
def pdist_additive_smoothed(counter, c=1):
    """The probability of word, given evidence from the counter.
    Add c to the count for each item, plus the 'unknown' item."""
    N = sum(counter.values())            # Amount of evidence
    Nplus = N + c * (len(counter) + 1)   # Evidence plus fake observations
    return lambda word: (counter[word] + c) / Nplus

P1w = pdist_additive_smoothed(COUNTS1)

P1w('neverbeforeseen')
_____no_output_____
MIT
ipynb/How to Do Things with Words.ipynb
mikiec84/pytudes
But now there's a problem ... we now have previously-unseen words with non-zero probabilities. And maybe $10^{-12}$ is about right for words that are observed in text: that is, if I'm *reading* a new text, the probability that the next word is unknown might be around $10^{-12}$. But if I'm *manufacturing* 20-letter sequences at random, the probability that one will be a word is much, much lower than $10^{-12}$. Look what happens:
segment.cache.clear()
segment('thisisatestofsegmentationofalongsequenceofwords')
_____no_output_____
MIT
ipynb/How to Do Things with Words.ipynb
mikiec84/pytudes
There are two problems:

First, we don't have a clear model of the unknown words. We just say "unknown" but we don't distinguish likely unknown from unlikely unknown. For example, is an 8-character unknown more likely than a 20-character unknown?

Second, we don't take into account evidence from *parts* of the unknown. For example, "unglobulate" versus "zxfkogultae".

For our next approach, *Good-Turing* smoothing re-estimates the probability of zero-count words, based on the probability of one-count words (and can also re-estimate for higher-number counts, but that is less interesting).

I. J. Good (1916 - 2009)             Alan Turing (1912 - 1954)

So, how many one-count words are there in `COUNTS`? (There aren't any in `COUNTS1`.) And what are their word lengths? Let's find out:
singletons = (w for w in COUNTS if COUNTS[w] == 1)

lengths = map(len, singletons)

Counter(lengths).most_common()

1357 / sum(COUNTS.values())

hist(lengths, bins=len(set(lengths)));

def pdist_good_turing_hack(counter, onecounter, base=1/26., prior=1e-8):
    """The probability of word, given evidence from the counter.
    For unknown words, look at the one-counts from onecounter, based on length.
    This gets ideas from Good-Turing, but doesn't implement all of it.
    prior is an additional factor to make unknowns less likely.
    base is how much we attenuate probability for each letter beyond longest."""
    N = sum(counter.values())
    N2 = sum(onecounter.values())
    lengths = map(len, [w for w in onecounter if onecounter[w] == 1])
    ones = Counter(lengths)
    longest = max(ones)
    return (lambda word:
            counter[word] / N if (word in counter)
            else prior * (ones[len(word)] / N2 or
                          ones[longest] / N2 * base ** (len(word)-longest)))

# Redefine P1w
P1w = pdist_good_turing_hack(COUNTS1, COUNTS)

segment.cache.clear()
segment('thisisatestofsegmentationofaverylongsequenceofwords')
_____no_output_____
MIT
ipynb/How to Do Things with Words.ipynb
mikiec84/pytudes
That was somewhat unsatisfactory. We really had to crank up the prior, specifically because the process of running `segment` generates so many non-word candidates (and also because there will be fewer unknowns with respect to the billion-word `WORDS1` than with respect to the million-word `WORDS`). It would be better to separate out the prior from the word distribution, so that the same distribution could be used for multiple tasks, not just for this one.

Now let's think for a short while about smoothing **bigram** counts. Specifically, what if we haven't seen a bigram sequence, but we've seen both words individually? For example, to evaluate P("Greenland") in the phrase "turn left at Greenland", we might have three pieces of evidence:

    P("Greenland")
    P("Greenland" | "at")
    P("Greenland" | "left", "at")

Presumably, the first would have a relatively large count, and thus large reliability, while the second and third would have decreasing counts and reliability. With *interpolation smoothing* we combine all three pieces of evidence, with a linear combination:

$P(w_3 \mid w_1w_2) = c_1 P(w_3) + c_2 P(w_3 \mid w_2) + c_3 P(w_3 \mid w_1w_2)$

How do we choose $c_1, c_2, c_3$? By experiment: train on training data, maximize $c$ values on development data, then evaluate on test data.

However, when we do this, we are saying, with probability $c_1$, that a word can appear anywhere, regardless of previous words. But some words are more free to do that than other words. Consider two words with similar probability:
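As a concrete illustration of the linear combination above (not part of the original notebook), here is a minimal sketch. `P1w` exists in this notebook; `P2w_cond` and `P3w_cond` are hypothetical conditional-probability helpers, and the weights are placeholders that would be tuned on development data.

```python
# A minimal sketch of interpolation smoothing.
# P2w_cond(w3, w2) ~ P(w3 | w2) and P3w_cond(w3, w1, w2) ~ P(w3 | w1 w2)
# are assumed helpers, not functions defined in this notebook.
def P_interpolated(w3, w1, w2, c1=0.2, c2=0.3, c3=0.5):
    """P(w3 | w1 w2) as a weighted sum of unigram, bigram and trigram evidence."""
    return (c1 * P1w(w3)
            + c2 * P2w_cond(w3, w2)
            + c3 * P3w_cond(w3, w1, w2))
```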
print P1w('francisco')
print P1w('individuals')
7.73314623661e-05
7.72494966889e-05
MIT
ipynb/How to Do Things with Words.ipynb
mikiec84/pytudes
They have similar unigram probabilities but differ in their freedom to be the second word of a bigram:
print [bigram for bigram in COUNTS2 if bigram.endswith('francisco')]
print [bigram for bigram in COUNTS2 if bigram.endswith('individuals')]
['are individuals', 'other individuals', 'on individuals', 'infected individuals', 'in individuals', 'where individuals', 'or individuals', 'which individuals', 'to individuals', 'both individuals', 'help individuals', 'more individuals', 'interested individuals', 'from individuals', '<S> individuals', 'income individuals', 'these individuals', 'about individuals', 'the individuals', 'among individuals', 'some individuals', 'those individuals', 'by individuals', 'minded individuals', 'These individuals', 'qualified individuals', 'certain individuals', 'different individuals', 'For individuals', 'few individuals', 'and individuals', 'two individuals', 'for individuals', 'between individuals', 'affected individuals', 'healthy individuals', 'private individuals', 'with individuals', 'following individuals', 'as individuals', 'such individuals', 'that individuals', 'all individuals', 'of individuals', 'many individuals']
MIT
ipynb/How to Do Things with Words.ipynb
mikiec84/pytudes
Intuitively, words that have appeared in many different bigrams before are more likely to appear in a new, previously unseen bigram. In *Kneser-Ney* smoothing (Reinhard Kneser, Hermann Ney) we multiply the bigram counts by this ratio. But I won't implement that here, because The Count never covered it.

(10) One More Task: Secret Codes
===

Let's tackle one more task: decoding secret codes. We'll start with the simplest of codes, a rotation cipher, sometimes called a shift cipher or a Caesar cipher (because this was state-of-the-art cryptography in 100 BC). First, a method to encode:
def rot(msg, n=13):
    "Encode a message with a rotation (Caesar) cipher."
    return encode(msg, alphabet[n:]+alphabet[:n])

def encode(msg, key):
    "Encode a message with a substitution cipher."
    table = string.maketrans(upperlower(alphabet), upperlower(key))
    return msg.translate(table)

def upperlower(text): return text.upper() + text.lower()

rot('This is a secret message.', 1)

rot('This is a secret message.', 13)

rot(rot('This is a secret message.'))
_____no_output_____
MIT
ipynb/How to Do Things with Words.ipynb
mikiec84/pytudes
Now decoding is easy: try all 26 candidates, and find the one with the maximum Pwords:
def decode_rot(secret):
    "Decode a secret message that has been encoded with a rotation cipher."
    candidates = [rot(secret, i) for i in range(len(alphabet))]
    return max(candidates, key=lambda msg: Pwords(tokens(msg)))

msg = 'Who knows the answer?'
secret = rot(msg, 17)

print(secret)
print(decode_rot(secret))
Nyf befnj kyv rejnvi?
Who knows the answer?
MIT
ipynb/How to Do Things with Words.ipynb
mikiec84/pytudes
Let's make it a tiny bit harder. When the secret message contains separate words, it is too easy to decode by guessing that the one-letter words are most likely "I" or "a". So what if the encode routine mushed all the letters together:
def encode(msg, key):
    "Encode a message with a substitution cipher; remove non-letters."
    msg = cat(tokens(msg))  ## Change here
    table = string.maketrans(upperlower(alphabet), upperlower(key))
    return msg.translate(table)
_____no_output_____
MIT
ipynb/How to Do Things with Words.ipynb
mikiec84/pytudes
Now we can decode by segmenting. We change candidates to be a list of segmentations, and still choose the candidate with the best Pwords:
def decode_rot(secret):
    """Decode a secret message that has been encoded with a rotation cipher,
    and which has had all the non-letters squeezed out."""
    candidates = [segment(rot(secret, i)) for i in range(len(alphabet))]
    return max(candidates, key=lambda msg: Pwords(msg))

msg = 'Who knows the answer this time? Anyone? Bueller?'
secret = rot(msg, 19)

print(secret)
print(decode_rot(secret))

candidates = [segment(rot(secret, i)) for i in range(len(alphabet))]
for c in candidates:
    print int(log10(Pwords(c))), ' '.join(c)
-123 p ah d g h p l max t g l p x km a bl mb f x t gr h g x un x e ex k
-133 q b i eh i q m n by u hm q y l n b cm n c g y u h s i h y vo y ff y l
-128 r c j f i jr no c z v in r z mo c d nod h z v it j i z w p z g g z m
-115 sd k g j k so p da w j os an p de op e i a w j u k j ax q ah h an
-144 t el h k l t p q e b x k p t b o q e f p q f j b x k v l k by r b ii b o
-145 u f m il m u q r f cy l qu c p r f g q r g k cy l wm l c z s c j j c p
-141 v g n j m n v r sg d z mr v d q sg h r s h l d z m x nm dat d k k d q
-39 who knows the answer this time anyone bu e ll er
-119 xi p l op x tu if b o t x f s u i j tu j n f b oz p of c v f mm f s
-151 y j q m p q y u v j g c p u y g t v j ku v k o g c pa q pg d w g n n g t
-151 z k r n q r z v w k h d q v z h u w k l v w l p h d q br q he x ho oh u
-97 also r saw x lie r w a iv x l m w x m q i er c s r if y i pp iv
-146 b mt p st b x y m j f s x b j w y m n x y n r j f sd t s j g z j q q j w
-137 c n u q tu cy z n k g ty c k x z no y z os k g t eut k ha k r r k x
-124 do v r u v d z a o l h u z d ly a op z apt l h u f v u li bl s sly
-121 e p w s v we ab pm iv a em z b p q ab qu m iv g w v m j cm t tm z
-142 f q x t w x f bc q n j w b fn a c q r bc r v n j wh x w n k d n u un a
-119 gr y u x y g c dr ok x c go b dr s c d s w ok xi y x o leo v vo b
-119 h s z vy z h des ply d h p c est de t x ply j z y pm f p w w p c
-137 it a w z a i e ft q m ze i q d f tu e f u y q m z ka z q n g q xx q d
-130 j u b x ab j f g u r na f jr eg u v f g v z r n alba r oh r y y re
-125 k v cy bc k g h v sob g k s f h v w g h was o b m c b s p is z z s f
-130 l w d z c d l hi w t p ch l t g i w x h ix b t p c nd c t q j t a at g
-123 m x e a dem i j x u q dim u h j x y i j y c u q doe du r ku b bu h
-124 ny f be fn j k y v re j n vi k y z j k z d v rep fe vs l v c c vi
-134 oz g cf go k l z w s f k o w j l z ak la e w s f q g f w tm w d d w j
MIT
ipynb/How to Do Things with Words.ipynb
mikiec84/pytudes
What about a general substitution cipher? The problem is that there are 26! substitution ciphers, and we can't enumerate all of them. We would need to search through this space. Initially make some guess at a substitution, then swap two letters; if that looks better keep going, if not try something else. This approach solves most substitution cipher problems (a sketch of such a search appears after the list below), although it can take a few minutes on a message of length 100 words or so.

(∞ and beyond) Where To Go Next
===

What to do next? Here are some options:

- **Spelling correction**: Use bigram or trigram context; make a model of spelling errors/edit distance; go beyond edit distance 2; make it more efficient
- **Evaluation**: Make a serious test suite; search for best parameters (e.g. $c_1, c_2, c_3$)
- **Smoothing**: Implement Kneser-Ney and/or Interpolation; do letter *n*-gram-based smoothing
- **Secret Codes**: Implement a search over substitution ciphers
- **Classification**: Given a corpus of texts, each with a classification label, write a classifier that will take a new text and return a label. Examples: spam/no-spam; favorable/unfavorable; what author am I most like; reading level.
- **Clustering**: Group data by similarity. Find synonyms/related words.
- **Parsing**: Representing nested structures rather than linear sequences of words. Relations between parts of the structure. Implicit missing bits. Inducing a grammar.
- **Meaning**: What semantic relations are meant by the syntactic relations?
- **Translation**: Using examples to transform one language into another.
- **Question Answering**: Using examples to transform a question into an answer, either by retrieving a passage, or by synthesizing one.
- **Speech**: Dealing with analog audio signals rather than discrete sequences of characters.
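As a rough illustration of the local-search idea for substitution ciphers mentioned above, here is a hedged sketch (not from the notebook). It assumes the `alphabet`, `cat`, `encode`, `segment`, and `Pwords` helpers defined earlier in this notebook; a real solver would add random restarts and a smarter scoring function.

```python
import random

def decode_subst(secret, steps=5000):
    """A hill-climbing sketch for a general substitution cipher:
    start from a random key, repeatedly swap two letters, and keep the
    swap whenever the segmented text scores better under Pwords."""
    key = list(alphabet)
    random.shuffle(key)

    def score(k):
        return Pwords(segment(encode(secret, cat(k))))

    best = score(key)
    for _ in range(steps):
        i, j = random.sample(range(len(key)), 2)
        key[i], key[j] = key[j], key[i]       # propose a swap
        s = score(key)
        if s > best:
            best = s                          # keep the improvement
        else:
            key[i], key[j] = key[j], key[i]   # undo the swap
    return segment(encode(secret, cat(key)))
```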
correct('cpgratulations')
_____no_output_____
MIT
ipynb/How to Do Things with Words.ipynb
mikiec84/pytudes
Experimenting with smaller action spaces

Here we look at the boards (states) that all evolve from an initial board. We're going to train on all board snapshots of a series of heuristically played games.
def new_initial_state(heuristics):
    board = GomokuBoard(heuristics, N=15, disp_width=8)
    policy = HeuristicGomokuPolicy(style=2, bias=.5, topn=5,
                                   threat_search=ThreatSearch(2, 2))
    board.set(H, 8).set('G', 6).set(G, 8).set(F, 8).set(H, 9).set(H, 10)
    return board, policy

new_initial_state(heuristics)[0].display('current')

board, policy = new_initial_state(heuristics)
_ = roll_out(board, policy, 40)
board.display('current')
stones = board.stones.copy()

from wgomoku import heuristic_QF, transform, create_sample, wrap_sample

sample = create_sample(board.stones.copy(), board.N, 1 - board.current_color)
o, d = np.rollaxis(sample, 2, 0)
2 * d + o

def create_samples_and_qvalues(board, heuristics):
    all_stones_t = [transform(board.stones.copy(), board.N, rot, ref)
                    for rot in range(4) for ref in [False, True]]
    samples = []
    qvalues = []
    for stones_t in all_stones_t:
        sample = create_sample(stones_t, board.N, 1 - board.current_color)
        board = GomokuBoard(heuristics=heuristics, N=board.N, stones=stones_t)
        policy = HeuristicGomokuPolicy(style=STYLE_MIXED, bias=.5, topn=5,
                                       threat_search=ThreatSearch(2, 2))
        qvalue, default_value = heuristic_QF(board, policy)
        qvalue = wrap_sample(qvalue, default_value)
        samples.append(sample)
        qvalues.append(qvalue)
    return np.array(samples), np.reshape(qvalues, [8, board.N+2, board.N+2, 1])

samples, values = create_samples_and_qvalues(board, heuristics)
np.shape(samples), np.shape(values)

b, w = np.rollaxis(samples[3], 2, 0)
b + 2*w
_____no_output_____
Apache-2.0
GomokuData.ipynb
smurve/DeepGomoku
Plausibility check
index = np.argmax(values[3])
r, c = np.divmod(index, 17)
pos = gt.m2b((r, c), 17)
print(r, c, pos, values[3][r][c])

from wgomoku import create_samples_and_qvalues

def data_from_game(board, policy, heuristics):
    """
    Careful: This function rolls back the board
    """
    s, v = create_samples_and_qvalues(board, heuristics)
    while board.cursor > 6:
        board.undo()
        s1, v1 = create_samples_and_qvalues(board, heuristics)
        s = np.concatenate((s, s1))
        v = np.concatenate((v, v1))
    return s, v

from wgomoku import data_from_game
data = data_from_game(board, policy, heuristics)

o, d = np.rollaxis(data[0][30], 2, 0)
2 * d + o
_____no_output_____
Apache-2.0
GomokuData.ipynb
smurve/DeepGomoku
Yet another plausibility check
sample_no = 30

index = np.argmax(data[1][sample_no])
r, c = np.divmod(index, 17)
pos = gt.m2b((r, c), 17)
print(r, c, pos)

print("Value of the suggested pos:")
print(data[1][sample_no][r][c])

print("Value to the right of the suggested pos:")
print(data[1][sample_no][r][c+1])
_____no_output_____
Apache-2.0
GomokuData.ipynb
smurve/DeepGomoku
Save the results for later training
np.save("samples.npy", data[0])
np.save("values.npy", data[1])

samples = np.load("samples.npy")
values = np.load("values.npy")
samples.shape, values.shape

np.rollaxis(samples[0], 2, 0)

np.rollaxis(values[0], 2, 0)

np.rollaxis(samples[0], 2, 0)[0] + 2*np.rollaxis(samples[0], 2, 0)[1]
_____no_output_____
Apache-2.0
GomokuData.ipynb
smurve/DeepGomoku
Randomized Benchmarking

Introduction

One of the main challenges in building a quantum information processor is the non-scalability of completely characterizing the noise affecting a quantum system via process tomography. In addition, process tomography is sensitive to noise in the pre- and post-rotation gates plus the measurements (SPAM errors). Gateset tomography can take these errors into account, but the scaling is even worse. A complete characterization of the noise is useful because it allows for the determination of good error-correction schemes, and thus the possibility of reliable transmission of quantum information.

Since complete process tomography is infeasible for large systems, there is growing interest in scalable methods for partially characterizing the noise affecting a quantum system. A scalable (in the number $n$ of qubits comprising the system) and robust algorithm for benchmarking the full set of Clifford gates by a single parameter using randomization techniques was presented in [1]. The concept of using randomization methods for benchmarking quantum gates is commonly called **Randomized Benchmarking (RB)**.

References

1. Easwar Magesan, J. M. Gambetta, and Joseph Emerson, *Robust randomized benchmarking of quantum processes*, https://arxiv.org/pdf/1009.3639
2. Easwar Magesan, Jay M. Gambetta, and Joseph Emerson, *Characterizing Quantum Gates via Randomized Benchmarking*, https://arxiv.org/pdf/1109.6887
3. A. D. Córcoles, Jay M. Gambetta, Jerry M. Chow, John A. Smolin, Matthew Ware, J. D. Strand, B. L. T. Plourde, and M. Steffen, *Process verification of two-qubit quantum gates by randomized benchmarking*, https://arxiv.org/pdf/1210.7011
4. Jay M. Gambetta, A. D. Córcoles, S. T. Merkel, B. R. Johnson, John A. Smolin, Jerry M. Chow, Colm A. Ryan, Chad Rigetti, S. Poletto, Thomas A. Ohki, Mark B. Ketchen, and M. Steffen, *Characterization of addressability by simultaneous randomized benchmarking*, https://arxiv.org/pdf/1204.6308
5. David C. McKay, Sarah Sheldon, John A. Smolin, Jerry M. Chow, and Jay M. Gambetta, *Three Qubit Randomized Benchmarking*, https://arxiv.org/pdf/1712.06550

The Randomized Benchmarking Protocol

An RB protocol (see [1,2]) consists of the following steps. (We should first import the relevant Qiskit classes for the demonstration.)
#Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display

#Import the RB Functions
import qiskit.ignis.verification.randomized_benchmarking as rb

#Import Qiskit classes
import qiskit
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
_____no_output_____
Apache-2.0
content/ch-quantum-hardware/randomized-benchmarking.ipynb
NunoEdgarGFlowHub/qiskit-textbook
Step 1: Generate RB sequences

The RB sequences consist of random Clifford elements chosen uniformly from the Clifford group on $n$ qubits, including a computed reversal element that should return the qubits to the initial state.

More precisely, for each length $m$, we choose $K_m$ RB sequences. Each such sequence contains $m$ random elements $C_{i_j}$ chosen uniformly from the Clifford group on $n$ qubits, and the $(m+1)$-th element is defined as follows: $C_{i_{m+1}} = (C_{i_1}\cdot ... \cdot C_{i_m})^{-1}$. It can be found efficiently by the Gottesman-Knill theorem.

For example, below we generate several sequences of 2-qubit Clifford circuits.
#Generate RB circuits (2Q RB)

#number of qubits
nQ = 2
rb_opts = {}
#Number of Cliffords in the sequence
rb_opts['length_vector'] = [1, 10, 20, 50, 75, 100, 125, 150, 175, 200]
#Number of seeds (random sequences)
rb_opts['nseeds'] = 5
#Default pattern
rb_opts['rb_pattern'] = [[0,1]]

rb_circs, xdata = rb.randomized_benchmarking_seq(**rb_opts)
Making the n=2 Clifford Table
Apache-2.0
content/ch-quantum-hardware/randomized-benchmarking.ipynb
NunoEdgarGFlowHub/qiskit-textbook
As an example, we print the circuit corresponding to the first RB sequence
print(rb_circs[0][0])
[Flattened text drawing of the first 2-qubit RB circuit: single-qubit Clifford gates (H, S, Sdg, Z) and CNOTs on qr_0 and qr_1, followed by measurements into cr_0 and cr_1.]
Apache-2.0
content/ch-quantum-hardware/randomized-benchmarking.ipynb
NunoEdgarGFlowHub/qiskit-textbook
One can verify that the unitary representing each RB circuit should be the identity (up to a global phase). We simulate this using the Aer unitary simulator.
# Create a new circuit without the measurement
qregs = rb_circs[0][-1].qregs
cregs = rb_circs[0][-1].cregs
qc = qiskit.QuantumCircuit(*qregs, *cregs)
for i in rb_circs[0][-1][0:-nQ]:
    qc.data.append(i)

# The Unitary is an identity (with a global phase)
backend = qiskit.Aer.get_backend('unitary_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
job = qiskit.execute(qc, backend=backend, basis_gates=basis_gates)
print(np.around(job.result().get_unitary(), 3))
[[ 0.+1.j -0.+0.j -0.+0.j -0.+0.j]
 [-0.+0.j  0.+1.j -0.-0.j  0.+0.j]
 [ 0.-0.j  0.+0.j  0.+1.j -0.+0.j]
 [-0.-0.j  0.-0.j  0.-0.j  0.+1.j]]
Apache-2.0
content/ch-quantum-hardware/randomized-benchmarking.ipynb
NunoEdgarGFlowHub/qiskit-textbook
Step 2: Execute the RB sequences (with some noise)

We can execute the RB sequences either using the Qiskit Aer simulator (with some noise model) or using the IBMQ provider, and obtain a list of results.

By assumption each operation $C_{i_j}$ is allowed to have some error, represented by $\Lambda_{i_j,j}$, and each sequence can be modeled by the operation:
$$\textit{S}_{\textbf{i}_\textbf{m}} = \bigcirc_{j=1}^{m+1} (\Lambda_{i_j,j} \circ C_{i_j})$$
where ${\textbf{i}_\textbf{m}} = (i_1,...,i_m)$ and $i_{m+1}$ is uniquely determined by ${\textbf{i}_\textbf{m}}$.
# Run on a noisy simulator
noise_model = NoiseModel()

# Depolarizing error
dp = 0.005
noise_model.add_all_qubit_quantum_error(depolarizing_error(dp, 1), ['u1', 'u2', 'u3'])
noise_model.add_all_qubit_quantum_error(depolarizing_error(2*dp, 2), 'cx')

backend = qiskit.Aer.get_backend('qasm_simulator')
_____no_output_____
Apache-2.0
content/ch-quantum-hardware/randomized-benchmarking.ipynb
NunoEdgarGFlowHub/qiskit-textbook
Step 3: Get statistics about the survival probabilities

For each of the $K_m$ sequences the survival probability $Tr[E_\psi \textit{S}_{\textbf{i}_\textbf{m}}(\rho_\psi)]$ is measured. Here $\rho_\psi$ is the initial state taking into account preparation errors and $E_\psi$ is the POVM element that takes into account measurement errors. In the ideal (noise-free) case $\rho_\psi = E_\psi = | \psi {\rangle} {\langle} \psi |$.

In practice one can measure the probability to go back to the exact initial state, i.e. all the qubits in the ground state ${|} 00...0 {\rangle}$, or just the probability for one of the qubits to return back to the ground state. Measuring the qubits independently can be more convenient if a correlated measurement scheme is not possible. Both measurements will fit to the same decay parameter according to the properties of the *twirl*.

Step 4: Find the averaged sequence fidelity

Average over the $K_m$ random realizations of the sequence to find the averaged sequence **fidelity**,
$$F_{seq}(m,|\psi{\rangle}) = Tr[E_\psi \textit{S}_{K_m}(\rho_\psi)]$$
where
$$\textit{S}_{K_m} = \frac{1}{K_m} \sum_{\textbf{i}_\textbf{m}} \textit{S}_{\textbf{i}_\textbf{m}}$$
is the average sequence operation.

Step 5: Fit the results

Repeat Steps 1 through 4 for different values of $m$ and fit the results for the averaged sequence fidelity to the model:
$$\textit{F}_{seq}^{(0)} \big(m,{|}\psi {\rangle} \big) = A_0 \alpha^m + B_0$$
where $A_0$ and $B_0$ absorb state preparation and measurement errors as well as an edge effect from the error on the final gate.

$\alpha$ determines the average error-rate $r$, which is also called **Error per Clifford (EPC)**, according to the relation
$$r = 1-\alpha-\frac{1-\alpha}{2^n} = \frac{2^n-1}{2^n}(1-\alpha)$$
(where $n=nQ$ is the number of qubits).

As an example, we calculate the average sequence fidelity for each of the RB sequences, fit the results to the exponential curve, and compute the parameters $\alpha$ and EPC.
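To make Step 5 concrete independently of the ignis fitter used below, here is a hedged sketch of the exponential fit with scipy; the sequence lengths and fidelities are placeholder values, not real data.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_model(m, A0, alpha, B0):
    """Averaged sequence fidelity model F(m) = A0 * alpha**m + B0."""
    return A0 * alpha**m + B0

# Placeholder data: sequence lengths and averaged survival probabilities.
lengths = np.array([1, 10, 20, 50, 75, 100, 125, 150, 175, 200])
fidelities = 0.75 * 0.98**lengths + 0.25   # illustrative values only

(A0, alpha, B0), _ = curve_fit(rb_model, lengths, fidelities, p0=[0.75, 0.95, 0.25])
n = 2                                      # number of qubits
epc = (2**n - 1) / 2**n * (1 - alpha)      # error per Clifford
print(alpha, epc)
```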
# Create the RB fitter
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx']
shots = 200
qobj_list = []
rb_fit = rb.RBFitter(None, xdata, rb_opts['rb_pattern'])
for rb_seed, rb_circ_seed in enumerate(rb_circs):
    print('Compiling seed %d' % rb_seed)
    new_rb_circ_seed = qiskit.compiler.transpile(rb_circ_seed, basis_gates=basis_gates)
    qobj = qiskit.compiler.assemble(new_rb_circ_seed, shots=shots)
    print('Simulating seed %d' % rb_seed)
    job = backend.run(qobj, noise_model=noise_model, backend_options={'max_parallel_experiments': 0})
    qobj_list.append(qobj)

    # Add data to the fitter
    rb_fit.add_data(job.result())
    print('After seed %d, alpha: %f, EPC: %f' % (rb_seed, rb_fit.fit[0]['params'][1], rb_fit.fit[0]['epc']))
Compiling seed 0
Apache-2.0
content/ch-quantum-hardware/randomized-benchmarking.ipynb
NunoEdgarGFlowHub/qiskit-textbook
Plot the results
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)

# Plot the essence by calling plot_rb_data
rb_fit.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)

# Add title and label
ax.set_title('%d Qubit RB' % (nQ), fontsize=18)

plt.show()
_____no_output_____
Apache-2.0
content/ch-quantum-hardware/randomized-benchmarking.ipynb
NunoEdgarGFlowHub/qiskit-textbook
The intuition behind RB

The depolarizing quantum channel has a parameter $\alpha$, and works like this: with probability $\alpha$, the state remains the same as before; with probability $1-\alpha$, the state becomes the totally mixed state, namely:
$$\rho_f = \alpha \rho_i + \frac{1-\alpha}{2^n} \mathbf{I}$$

Suppose that we have a sequence of $m$ gates, not necessarily Clifford gates, where the error channel of the gates is a depolarizing channel with parameter $\alpha$ (the same $\alpha$ for all the gates). Then with probability $\alpha^m$ the state is correct at the end of the sequence, and with probability $1-\alpha^m$ it becomes the totally mixed state, therefore:
$$\rho_f^m = \alpha^m \rho_i + \frac{1-\alpha^m}{2^n} \mathbf{I}$$

Now suppose that in addition we start with the ground state; that the entire sequence amounts to the identity; and that we measure the state at the end of the sequence in the standard basis. We derive that the probability of success at the end of the sequence is:
$$\alpha^m + \frac{1-\alpha^m}{2^n} = \frac{2^n-1}{2^n}\alpha^m + \frac{1}{2^n} = A_0\alpha^m + B_0$$

It follows that the probability of success, aka fidelity, decays exponentially with the sequence length, with exponent $\alpha$.

The last statement is not necessarily true when the channel is other than the depolarizing channel. However, it turns out that if the gates are uniformly-randomized Clifford gates, then the noise of each gate behaves on average as if it was the depolarizing channel, with some parameter that can be computed from the channel, and we obtain the exponential decay of the fidelity.

Formally, taking an average over a finite group $G$ (like the Clifford group) of a quantum channel $\bar \Lambda$ is also called a *twirl*:
$$W_G(\bar \Lambda) = \frac{1}{|G|} \sum_{u \in G} U^{\dagger} \circ \bar \Lambda \circ U$$

Twirling over the entire unitary group yields exactly the same result as the Clifford group. The Clifford group is a *2-design* of the unitary group.

Simultaneous Randomized Benchmarking

RB is designed to address fidelities in multiqubit systems in two ways. For one, RB over the full $n$-qubit space can be performed by constructing sequences from the $n$-qubit Clifford group. Additionally, the $n$-qubit space can be subdivided into sets of qubits $\{n_i\}$ and $n_i$-qubit RB performed in each subset simultaneously [4]. Both methods give metrics of fidelity in the $n$-qubit space.

For example, it is common to perform 2Q RB on the subset of two qubits defining a CNOT gate while the other qubits are quiescent. As explained in [4], this RB data will not necessarily decay exponentially because the other qubit subspaces are not twirled. Subsets are more rigorously characterized by simultaneous RB, which also measures some level of crosstalk error since all qubits are active.

An example of simultaneous RB (1Q RB and 2Q RB) can be found in: https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/ignis/randomized_benchmarking.ipynb

Predicted Gate Fidelity

If we know the errors on the underlying gates (the gateset) we can predict the fidelity. First we need to count the number of these gates per Clifford. Then, the two-qubit Clifford gate error function gives the error per 2Q Clifford. It assumes that the error in the underlying gates is depolarizing. This function is derived in the supplement to [5].
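To back up the exponential-decay intuition above with numbers (this check is not part of the original notebook), here is a short sketch that composes $m$ depolarizing channels on the ground state and compares the survival probability with $A_0\alpha^m + B_0$:

```python
import numpy as np

def depolarize(rho, alpha, n):
    """Apply one depolarizing channel with parameter alpha to an n-qubit state."""
    d = 2**n
    return alpha * rho + (1 - alpha) * np.eye(d) / d

n, alpha, m = 1, 0.95, 20
d = 2**n
rho = np.zeros((d, d))
rho[0, 0] = 1.0                                # ground state |0...0><0...0|
for _ in range(m):
    rho = depolarize(rho, alpha, n)

survival = rho[0, 0]                           # probability of measuring |0...0>
predicted = (d - 1) / d * alpha**m + 1 / d     # A0 * alpha**m + B0
print(survival, predicted)                     # the two numbers agree
```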
#Count the number of single and 2Q gates in the 2Q Cliffords
gates_per_cliff = rb.rb_utils.gates_per_clifford(qobj_list, xdata[0], basis_gates, rb_opts['rb_pattern'][0])
for i in range(len(basis_gates)):
    print("Number of %s gates per Clifford: %f" % (basis_gates[i],
          np.mean([gates_per_cliff[0][i], gates_per_cliff[1][i]])))

# Prepare lists of the number of qubits and the errors
ngates = np.zeros(7)
ngates[0:3] = gates_per_cliff[0][0:3]
ngates[3:6] = gates_per_cliff[1][0:3]
ngates[6] = gates_per_cliff[0][3]
gate_qubits = np.array([0, 0, 0, 1, 1, 1, -1], dtype=int)
gate_errs = np.zeros(len(gate_qubits))
gate_errs[[1, 4]] = dp/2    # convert from depolarizing error to epg (1Q)
gate_errs[[2, 5]] = 2*dp/2  # convert from depolarizing error to epg (1Q)
gate_errs[6] = dp*3/4       # convert from depolarizing error to epg (2Q)

#Calculate the predicted epc
pred_epc = rb.rb_utils.twoQ_clifford_error(ngates, gate_qubits, gate_errs)
print("Predicted 2Q Error per Clifford: %e" % pred_epc)

import qiskit
qiskit.__qiskit_version__
_____no_output_____
Apache-2.0
content/ch-quantum-hardware/randomized-benchmarking.ipynb
NunoEdgarGFlowHub/qiskit-textbook
Homework for week 4

Functions

1. Convert String Number to Integer

Create a function that can take something like ```"$5,356"```, ```"$250,000"```, or even ```"$10.75"``` and convert it into integers, returning ```5356```, ```250000```, and ```11``` respectively.
## build function here


## call the function on "$5,356"
## Did you get 5356?


## call the function on "$10.75"
## Did you get 11?


## Use map() to run your function on the following list
## save result in a list called updated_list
my_amounts = ["$12.24", "$4,576.33", "$23,629.01"]
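The homework leaves the function blank; one possible solution sketch (the name `money_to_int` is only an illustration) might look like this:

```python
# A possible solution sketch for the blanks above.
def money_to_int(amount):
    """Strip '$' and ',' from a string like '$5,356' and round to an int."""
    return round(float(amount.replace("$", "").replace(",", "")))

print(money_to_int("$5,356"))    # 5356
print(money_to_int("$10.75"))    # 11 (rounded)

my_amounts = ["$12.24", "$4,576.33", "$23,629.01"]
updated_list = list(map(money_to_int, my_amounts))
print(updated_list)              # [12, 4576, 23629]
```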
_____no_output_____
MIT
homework/homework-for-week-4-BLANKS.ipynb
jchapamalacara/fall21-students-practical-python
3. What if we encounter a list that has a mix of numbers as strings, integers, and floats?
## Run this cell
more_amounts = [960, "$12.24", "$4,576.33", "$23,629.01", 23.656]

## tweak the function here to handle all situations


## run it on more_amounts using map()
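A sketch of one way to tweak the function so it also handles plain ints and floats (again, the name is illustrative):

```python
# A sketch of the tweaked function that also accepts ints and floats.
def money_to_int_any(amount):
    """Handle ints, floats, and strings like '$4,576.33'."""
    if isinstance(amount, str):
        amount = float(amount.replace("$", "").replace(",", ""))
    return round(amount)

more_amounts = [960, "$12.24", "$4,576.33", "$23,629.01", 23.656]
print(list(map(money_to_int_any, more_amounts)))   # [960, 12, 4576, 23629, 24]
```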
_____no_output_____
MIT
homework/homework-for-week-4-BLANKS.ipynb
jchapamalacara/fall21-students-practical-python
Compare the DSigma profiles of HSC massive galaxies selected by outer-envelope stellar mass with those of SDSS redMaPPer clusters
# DeltaSigma profiles of HSC massive galaxies
topn_massive = pickle.load(open(os.path.join(res_dir, 'topn_galaxies_sum.pkl'), 'rb'))

# DeltaSigma profiles of redMaPPer and CAMIRA clusters
topn_cluster = pickle.load(open(os.path.join(res_dir, 'topn_clusters_cen_sum.pkl'), 'rb'))

# For clusters, but using both central and satellite galaxies
topn_cluster_all = pickle.load(open(os.path.join(res_dir, 'topn_clusters_sum.pkl'), 'rb'))
_____no_output_____
MIT
notebooks/figure/figF1.ipynb
mattkwiecien/jianbing
DSigma profiles of mock galaxies
sim_dsig = Table.read(os.path.join(sim_dir, 'sim_merge_all_dsig.fits'))
_____no_output_____
MIT
notebooks/figure/figF1.ipynb
mattkwiecien/jianbing
Halo mass distributions
sim_mhalo = Table.read(os.path.join(sim_dir, 'sim_merge_mhalo_hist.fits'))
_____no_output_____
MIT
notebooks/figure/figF1.ipynb
mattkwiecien/jianbing
Pre-compute lensing results for HSC galaxies
# Pre-compute
s16a_precompute = os.path.join(data_dir, 'topn_public_s16a_medium_precompute.hdf5')

hsc_pre = Table.read(s16a_precompute, path='hsc_extra')
red_sdss = Table.read(s16a_precompute, path='redm_sdss')
red_hsc = Table.read(s16a_precompute, path='redm_hsc')
_____no_output_____
MIT
notebooks/figure/figF1.ipynb
mattkwiecien/jianbing
Pre-compute lensing results for randoms
# Lensing data using medium photo-z quality cut
s16a_lensing = os.path.join(data_dir, 's16a_weak_lensing_medium.hdf5')

# Random
s16a_rand = Table.read(s16a_lensing, path='random')
_____no_output_____
MIT
notebooks/figure/figF1.ipynb
mattkwiecien/jianbing
Pre-defined number density bins
topn_bins = Table.read(os.path.join(bin_dir, 'topn_bins.fits'))
_____no_output_____
MIT
notebooks/figure/figF1.ipynb
mattkwiecien/jianbing
Compute the DSigma profiles of SDSS redMaPPer clusters

- 0.2 < z < 0.38 clusters
plt.scatter(red_sdss['z_best'], red_sdss['lambda_cluster_redm'], s=5, alpha=0.3)

mask_sdss_1 = (red_sdss['z_best'] >= 0.19) & (red_sdss['z_best'] <= 0.35)
print(mask_sdss_1.sum())

sdss_bin_1 = Table(copy.deepcopy(topn_bins[0]))
sdss_bin_1['n_obj'] = mask_sdss_1.sum()
sdss_bin_1['rank_low'] = 1
sdss_bin_1['rank_upp'] = mask_sdss_1.sum()
sdss_bin_1['index_low'] = 0
sdss_bin_1['index_upp'] = mask_sdss_1.sum() - 1

mask_sdss_2 = (red_sdss['z_best'] >= 0.19) & (red_sdss['z_best'] <= 0.50) & (red_sdss['lambda_cluster_redm'] >= 50)
print(mask_sdss_2.sum())

sdss_bin_2 = Table(copy.deepcopy(topn_bins[0]))
sdss_bin_2['n_obj'] = mask_sdss_2.sum()
sdss_bin_2['rank_low'] = 1
sdss_bin_2['rank_upp'] = mask_sdss_2.sum()
sdss_bin_2['index_low'] = 0
sdss_bin_2['index_upp'] = mask_sdss_2.sum() - 1

redm_sdss_dsig_1 = wlensing.gather_topn_dsigma_profiles(
    red_sdss, s16a_rand, sdss_bin_1, 'lambda_cluster_redm',
    mask=mask_sdss_1, n_rand=100000, n_boot=200, verbose=True)

redm_sdss_sum_1 = scatter.compare_model_dsigma(
    redm_sdss_dsig_1, sim_dsig, model_err=False, poly=True, verbose=True)

fig = visual.sum_plot_topn(
    redm_sdss_sum_1, r'$\lambda_{\rm SDSS}$', note=None, cov_type='jk', ref_tab=None)

redm_sdss_dsig_2 = wlensing.gather_topn_dsigma_profiles(
    red_sdss, s16a_rand, sdss_bin_2, 'lambda_cluster_redm',
    mask=mask_sdss_2, n_rand=100000, n_boot=200, verbose=True)

redm_sdss_sum_2 = scatter.compare_model_dsigma(
    redm_sdss_dsig_2, sim_dsig, model_err=False, poly=True, verbose=True)

fig = visual.sum_plot_topn(
    redm_sdss_sum_2, r'$\lambda_{\rm SDSS}$', note=None, cov_type='jk', ref_tab=None)
# Using column: lambda_cluster_redm
# Bin 1: 0 - 54
# Dealing with Bin: 1
MIT
notebooks/figure/figF1.ipynb
mattkwiecien/jianbing
Compute the DSigma profiles of HSC redMaPPer clusters
plt.scatter(red_hsc['z_best'], red_hsc['lambda'], s=5, alpha=0.3)

mask_hsc_1 = (red_hsc['z_best'] >= 0.19) & (red_hsc['z_best'] <= 0.35)

redm_hsc_dsig_1 = wlensing.gather_topn_dsigma_profiles(
    red_hsc, s16a_rand, sdss_bin_1, 'lambda',
    mask=mask_hsc_1, n_rand=100000, n_boot=200, verbose=True)

print(np.min(redm_hsc_dsig_1['samples']))

redm_hsc_sum_1 = scatter.compare_model_dsigma(
    redm_hsc_dsig_1, sim_dsig, model_err=False, poly=True, verbose=True)

fig = visual.sum_plot_topn(
    redm_hsc_sum_1, r'$\lambda_{\rm HSC}$', note=None, cov_type='jk', ref_tab=None)

mask_hsc_2 = (red_hsc['z_best'] >= 0.19) & (red_hsc['z_best'] <= 0.5)

redm_hsc_dsig_2 = wlensing.gather_topn_dsigma_profiles(
    red_hsc, s16a_rand, sdss_bin_2, 'lambda',
    mask=mask_hsc_2, n_rand=100000, n_boot=200, verbose=True)

print(np.min(redm_hsc_dsig_2['samples']))

redm_hsc_sum_2 = scatter.compare_model_dsigma(
    redm_hsc_dsig_2, sim_dsig, model_err=False, poly=True, verbose=True)

fig = visual.sum_plot_topn(
    redm_hsc_sum_2, r'$\lambda_{\rm HSC}$', note=None, cov_type='jk', ref_tab=None)
# Using column: lambda
# Bin 1: 0 - 54
14.115065
# Dealing with Bin: 1
MIT
notebooks/figure/figF1.ipynb
mattkwiecien/jianbing
Compute the DSigma profiles of HSC massive galaxies
# S18A bright star mask
bsm_s18a = hsc_pre['flag'] > 0

# General mask for HSC galaxies
mask_mout_1 = (
    (hsc_pre['c82_100'] <= 18.) &
    (hsc_pre['logm_100'] - hsc_pre['logm_50'] <= 0.2) &
    (hsc_pre['logm_50_100'] > 0) &
    bsm_s18a &
    (hsc_pre['z'] >= 0.19) &
    (hsc_pre['z'] <= 0.35)
)

mask_mout_2 = (
    (hsc_pre['c82_100'] <= 18.) &
    (hsc_pre['logm_100'] - hsc_pre['logm_50'] <= 0.2) &
    (hsc_pre['logm_50_100'] > 0) &
    bsm_s18a &
    (hsc_pre['z'] >= 0.19) &
    (hsc_pre['z'] <= 0.50)
)

# Mask to select "central" galaxies
cen_mask_1 = hsc_pre['cen_mask_1'] > 0
cen_mask_2 = hsc_pre['cen_mask_2'] > 0
cen_mask_3 = hsc_pre['cen_mask_3'] > 0

hsc_mout_dsig_1 = wlensing.gather_topn_dsigma_profiles(
    hsc_pre, s16a_rand, sdss_bin_1, 'logm_50_100',
    mask=mask_mout_1, n_rand=100000, n_boot=200, verbose=True)

hsc_mout_sum_1 = scatter.compare_model_dsigma(
    hsc_mout_dsig_1, sim_dsig, model_err=False, poly=True, verbose=True)

fig = visual.sum_plot_topn(
    hsc_mout_sum_1, r'$M_{\star, [50,100]}$', note=None, cov_type='jk', ref_tab=None)

hsc_mout_dsig_2 = wlensing.gather_topn_dsigma_profiles(
    hsc_pre, s16a_rand, sdss_bin_2, 'logm_50_100',
    mask=mask_mout_2, n_rand=100000, n_boot=200, verbose=True)

hsc_mout_sum_2 = scatter.compare_model_dsigma(
    hsc_mout_dsig_2, sim_dsig, model_err=False, poly=True, verbose=True)

fig = visual.sum_plot_topn(
    hsc_mout_sum_2, r'$M_{\star, [50,100]}$', note=None, cov_type='jk', ref_tab=None)
# Using column: logm_50_100
# Bin 1: 0 - 54
# Dealing with Bin: 1
MIT
notebooks/figure/figF1.ipynb
mattkwiecien/jianbing
Making a figure for the paper
def compare_sdss_redm_profiles(
        dsig_ref, dsig_cmp, sim_dsig, sig_type='bt', compare_to_model=True,
        label_ref=r'$\rm Ref$', label_cmp=r'$\rm Test$',
        sub_ref=r'{\rm Ref}', sub_cmp=r'{\rm Test}',
        cmap_list=None, color_bins=None,
        marker_ref='o', msize_ref=180, marker_cmp='P', msize_cmp=180,
        show_best_cmp=False, middle_title=None, mvir_min=None,
        dsig_cmp_2=None, label_cmp_2=r'$=rm Test 2$'):
    """Compare the Dsigma profiles."""
    def get_dsig_ratio(obs, ref, mod=None):
        """"""
        obs_rand = np.random.normal(
            loc=obs['dsigma'][0], scale=obs['dsig_err_{:s}'.format(sig_type)][0])
        if mod is not None:
            ref_rand = np.random.normal(
                loc=mod['dsig'], scale=(mod['dsig_err'] * err_factor))
            ref_inter = 10.0 ** (
                interpolate.interp1d(
                    mod['r_mpc'], np.log10(ref_rand),
                    fill_value='extrapolate')(r_mpc_obs)
            )
            return obs_rand / ref_inter
        else:
            ref_rand = np.random.normal(
                loc=ref['dsigma'][0], scale=obs['dsig_err_{:s}'.format(sig_type)][0])
            return obs_rand / ref_rand

    # Color maps and bins
    if cmap_list is None:
        cmap_list = [
            palettable.colorbrewer.sequential.OrRd_7_r,
            palettable.colorbrewer.sequential.Blues_7_r,
            palettable.colorbrewer.sequential.YlGn_7_r,
            palettable.colorbrewer.sequential.Purples_7_r]
    if color_bins is None:
        color_bins = ["#e41a1c", "#377eb8", "#1b9e77", "#984ea3"]

    # Radius bin of the observed DSigma profiles
    r_mpc_obs = dsig_ref[0].meta['r_mpc']

    # ---- Start the figure ---- #
    # Setup the figure
    n_col, n_bins = 2, len(dsig_ref)
    fig_y = int(4.2 * n_bins + 2)

    left, right = 0.11, 0.995
    if n_bins == 4:
        bottom, top = 0.055, 0.96
    elif n_bins == 3:
        bottom, top = 0.08, 0.96
    elif n_bins == 2:
        bottom, top = 0.10, 0.93
    elif n_bins == 1:
        bottom, top = 0.16, 0.92

    x_space = 0.12
    x_size = (right - left - x_space * 1.05) / n_col
    y_size = (top - bottom) / n_bins

    fig = plt.figure(figsize=(12, fig_y))

    for idx, dsig in enumerate(dsig_ref):
        # Setup the three columns
        ax1 = fig.add_axes([left, top - y_size * (idx + 1), x_size, y_size])
        ax2 = fig.add_axes([left + x_space + x_size, top - y_size * (idx + 1), x_size, y_size])

        # Subplot title
        if idx == 0:
            ax1.set_title(r'$R \times \Delta\Sigma\ \rm Profile$', fontsize=35, pad=10)
            if middle_title is None:
                ax2.set_title(r'${\rm Richness\ v.s.}\ M_{\star,\ \rm Outer}$', fontsize=35, pad=10)
            else:
                ax2.set_title(middle_title, fontsize=35, pad=10)

        # Color map
        cmap, color = cmap_list[idx], color_bins[idx]

        # MDPL halo mass information for this bin
        sim_dsig_bin = sim_dsig[sim_dsig['bin'] == 0]

        # DSigma result for this bin
        dsig_ref_bin = dsig_ref[idx]
        dsig_cmp_bin = dsig_cmp[idx]

        # Best fit DSigma profiles
        dsig_ref_best = sim_dsig_bin[
            np.argmin(
                np.abs(sim_dsig_bin['scatter'] - dsig_ref_bin['sig_med_{:s}'.format(sig_type)]))]
        dsig_cmp_best = sim_dsig_bin[
            np.argmin(
                np.abs(sim_dsig_bin['scatter'] - dsig_cmp_bin['sig_med_{:s}'.format(sig_type)]))]

        if dsig_ref_bin['sig_med_{:s}'.format(sig_type)] < 0.6:
            err_factor = 4.
        else:
            err_factor = 3.

        # Interpolated the reference model profile
        ref_model_inter = 10.0 ** (
            interpolate.interp1d(
                dsig_ref_best['r_mpc'], np.log10(dsig_ref_best['dsig']),
                fill_value='extrapolate')(r_mpc_obs)
        )

        if compare_to_model:
            ratio_sample = [
                get_dsig_ratio(
                    dsig_cmp_bin, dsig_ref_bin, mod=dsig_ref_best) for i in np.arange(2000)]
            ratio_cmp = dsig_cmp_bin['dsigma'][0] / ref_model_inter
        else:
            ratio_sample = [
                get_dsig_ratio(dsig_cmp_bin, dsig_ref_bin, mod=None) for i in np.arange(2000)]
            ratio_cmp = dsig_cmp_bin['dsigma'][0] / dsig_ref_bin['dsigma'][0]

        ratio_cmp_err_low = ratio_cmp - np.nanpercentile(ratio_sample, 16, axis=0)
        ratio_cmp_err_upp = np.nanpercentile(ratio_sample, 84, axis=0) - ratio_cmp

        if dsig_cmp_2 is not None:
            try:
                dsig_cmp_2_bin = dsig_cmp_2[idx]
                dsig_cmp_2_best = sim_dsig_bin[
                    np.argmin(
                        np.abs(sim_dsig_bin['scatter'] - dsig_cmp_2_bin['sig_med_{:s}'.format(sig_type)]))]

                if compare_to_model:
                    ratio_sample = [
                        get_dsig_ratio(
                            dsig_cmp_2_bin, dsig_ref_bin, mod=dsig_ref_best) for i in np.arange(2000)]
                    ratio_cmp_2 = dsig_cmp_2_bin['dsigma'][0] / ref_model_inter
                else:
                    ratio_sample = [
                        get_dsig_ratio(dsig_cmp_2_bin, dsig_ref_bin, mod=None) for i in np.arange(2000)]
                    ratio_cmp_2 = dsig_cmp_2_bin['dsigma'][0] / dsig_ref_bin['dsigma'][0]

                ratio_cmp_2_err_low = ratio_cmp_2 - np.nanpercentile(ratio_sample, 16, axis=0)
                ratio_cmp_2_err_upp = np.nanpercentile(ratio_sample, 84, axis=0) - ratio_cmp_2
                show_cmp_2 = True
            except Exception:
                show_cmp_2 = False
        else:
            show_cmp_2 = False

        # ----- Plot 1: R x DSigma plot ----- #
        ax1.set_xscale("log", nonpositive='clip')

        # MDPL: Best-fit
        ax1.fill_between(
            dsig_ref_best['r_mpc'],
            dsig_ref_best['r_mpc'] * (
                dsig_ref_best['dsig'] - dsig_ref_best['dsig_err'] * err_factor),
            dsig_ref_best['r_mpc'] * (
                dsig_ref_best['dsig'] + dsig_ref_best['dsig_err'] * err_factor),
            alpha=0.2, edgecolor='grey', linewidth=2.0,
            label=r'__no_label__', facecolor='grey', linestyle='-', rasterized=True)

        if show_best_cmp:
            ax1.fill_between(
                dsig_cmp_best['r_mpc'],
                dsig_cmp_best['r_mpc'] * (
                    dsig_cmp_best['dsig'] - dsig_cmp_best['dsig_err'] * err_factor),
                dsig_cmp_best['r_mpc'] * (
                    dsig_cmp_best['dsig'] + dsig_cmp_best['dsig_err'] * err_factor),
                alpha=0.15, edgecolor='grey', linewidth=2.0,
                label=r'__no_label__', facecolor='grey', linestyle='--', rasterized=True)

        # Reference DSigma profile
        ax1.errorbar(
            r_mpc_obs, r_mpc_obs * dsig_ref_bin['dsigma'][0],
            yerr=(r_mpc_obs * dsig_ref_bin['dsig_err_{:s}'.format(sig_type)][0]),
            ecolor=cmap.mpl_colormap(0.6), color=cmap.mpl_colormap(0.6),
            alpha=0.9, capsize=4, capthick=2.5, elinewidth=2.5,
            label='__no_label__', fmt='o', zorder=0)

        ax1.scatter(
            r_mpc_obs, r_mpc_obs * dsig_ref_bin['dsigma'][0],
            s=msize_ref, alpha=0.9, facecolor=cmap.mpl_colormap(0.6),
            edgecolor='w', marker=marker_ref, linewidth=2.5, label=label_ref)

        # DSigma profiles to compare with
        ax1.errorbar(
            r_mpc_obs * 1.01, r_mpc_obs * dsig_cmp_bin['dsigma'][0],
            yerr=(r_mpc_obs * dsig_cmp_bin['dsig_err_{:s}'.format(sig_type)][0]),
            ecolor=color, color='w', alpha=0.9, capsize=4, capthick=2.5,
            elinewidth=2.5, label='__no_label__', fmt='o', zorder=0)

        ax1.scatter(
            r_mpc_obs * 1.01, r_mpc_obs * dsig_cmp_bin['dsigma'][0],
            s=msize_cmp, alpha=0.95, facecolor='w', edgecolor=color,
            marker=marker_cmp, linewidth=3.0, label=label_cmp)

        y_max = np.max(
            [np.max(dsig_ref_best['r_mpc'] * dsig_ref_best['dsig']),
             np.max(dsig_cmp_best['r_mpc'] * dsig_cmp_best['dsig'])]) * 1.47
        ax1.set_ylim(3.1, y_max)

        # Sample Info
        if idx == 1:
            _ = ax1.text(0.2, 0.08, r'$0.19 < z < 0.35$',
                         fontsize=28, transform=ax1.transAxes)
            _ = ax2.text(0.16, 0.08, r'$\lambda_{\rm SDSS} \geq 20;\ N=191$',
                         fontsize=28, transform=ax2.transAxes)
        elif idx == 0:
            _ = ax1.text(0.2, 0.08, r'$0.19 < z < 0.50$',
                         fontsize=28, transform=ax1.transAxes)
            _ = ax2.text(0.16, 0.08, r'$\lambda_{\rm SDSS} \geq 50;\ N=55$',
                         fontsize=28, transform=ax2.transAxes)

        if idx == 1:
            ax1.legend(loc='upper left', fontsize=22, handletextpad=0.04, ncol=2, mode="expand")

        if idx == len(dsig_ref) - 1:
            _ = ax1.set_xlabel(r'$R\ [\mathrm{Mpc}]$', fontsize=30)
        else:
            ax1.set_xticklabels([])

        _ = ax1.set_ylabel(r'$R \times \Delta\Sigma\ [10^{6}\ M_{\odot}/\mathrm{pc}]$', fontsize=32)

        # ----- Plot 2: Ratio of DSigma plot ----- #
        ax2.set_xscale("log", nonpositive='clip')

        ax2.axhline(
            1.0, linewidth=3.0, alpha=0.5, color='k',
            linestyle='--', label='__no_label__', )

        # Uncertainty of the model
        ax2.fill_between(
            dsig_ref_best['r_mpc'],
            1.0 - (dsig_ref_best['dsig_err'] * err_factor / dsig_ref_best['dsig']),
            1.0 + (dsig_ref_best['dsig_err'] * err_factor / dsig_ref_best['dsig']),
            alpha=0.2, edgecolor='none', linewidth=1.0,
            label='__no_label__', facecolor='grey', rasterized=True)

        if show_cmp_2:
            ax2.errorbar(
                r_mpc_obs * 1.2, ratio_cmp_2,
                yerr=[ratio_cmp_2_err_low, ratio_cmp_2_err_upp],
                ecolor=cmap.mpl_colormap(0.3), color='w', alpha=0.5,
                capsize=4, capthick=2.5, elinewidth=3.0,
                label='__no_label__', fmt='o', zorder=0)
            ax2.scatter(
                r_mpc_obs * 1.2, ratio_cmp_2, s=260, alpha=0.7,
                facecolor=cmap.mpl_colormap(0.3), edgecolor='w', marker='H',
                linewidth=3.0, label=label_cmp_2)

        ax2.errorbar(
            r_mpc_obs, ratio_cmp, yerr=[ratio_cmp_err_low, ratio_cmp_err_upp],
            ecolor=color, color='w', alpha=0.8, capsize=4, capthick=2.5,
            elinewidth=3.0, label='__no_label__', fmt='o', zorder=0)

        ax2.scatter(
            r_mpc_obs, ratio_cmp, s=msize_cmp, alpha=0.9, facecolor='w',
            edgecolor=color, marker=marker_cmp, linewidth=3.0, label=label_cmp)

        ax2.set_ylim(0.20, 2.49)

        if np.max(ratio_cmp) < 1.2:
            y_pos = 0.85
        else:
            y_pos = 0.15

        if idx == 1:
            ax2.legend(loc='upper left', fontsize=22, handletextpad=0.05)

        if idx == len(dsig_ref) - 1:
            _ = ax2.set_xlabel(r'$R\ [\mathrm{Mpc}]$', fontsize=30)
        else:
            ax2.set_xticklabels([])

        _ = ax2.set_ylabel(r'$\Delta\Sigma_{\rm redM}/\Delta\Sigma_{[50, 100]}$', fontsize=35)

        for tick in ax1.xaxis.get_major_ticks():
            tick.label.set_fontsize(30)
        for tick in ax1.yaxis.get_major_ticks():
            tick.label.set_fontsize(30)
        for tick in ax2.xaxis.get_major_ticks():
            tick.label.set_fontsize(30)
        for tick in ax2.yaxis.get_major_ticks():
            tick.label.set_fontsize(30)

    return fig

dsig_cmp_2 = [redm_hsc_sum_2, redm_hsc_sum_1]
label_cmp_2 = r'${\rm redM\ HSC}$'

dsig_cmp = [redm_sdss_sum_2, redm_sdss_sum_1]
label_cmp = r'${\rm redM\ SDSS}$'
sub_cmp = r'{\rm redM\ SDSS}z'

dsig_ref = [hsc_mout_sum_2, hsc_mout_sum_1]
label_ref = r'$M_{\star, [50, 100]}$'
sub_ref = r'{[50, 100]}'

fig = compare_sdss_redm_profiles(
    dsig_ref, dsig_cmp, sim_dsig, sig_type='bt', compare_to_model=True,
    label_ref=label_ref, label_cmp=label_cmp, sub_ref=sub_ref, sub_cmp=sub_cmp,
    marker_ref='o', marker_cmp='D', msize_ref=220, msize_cmp=160,
    dsig_cmp_2=dsig_cmp_2, label_cmp_2=label_cmp_2, mvir_min=12.8,
    middle_title=r'$\rm HSC\ v.s.\ SDSS$')

fig.savefig(os.path.join(fig_dir, 'fig_F1.png'), dpi=120)
fig.savefig(os.path.join(fig_dir, 'fig_F1.pdf'), dpi=120)

redm_hsc_dsig_1['samples'].min()

redm_hsc_dsig_2['samples'].min()
_____no_output_____
MIT
notebooks/figure/figF1.ipynb
mattkwiecien/jianbing
Data pre-processing

All data consumed by an artificial intelligence algorithm must be numeric and in a specific form. Because most data comes in a different format that cannot be used by an algorithm directly, it must be converted into a suitable format. This task is known as `preprocessing`.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Read the data from the original CSV file
df = pd.read_csv('telco.csv')

df.info()

# Inspect the composition of the data in the DataFrame
df.head()
_____no_output_____
MIT
clase4/4-1_preprocessing.ipynb
jinchuika/umg-ai
Objective

We will apply data pre-processing to make a prediction on the `MonthlyCharges` column, trying to predict a customer's monthly charges. To do so, we select the columns that can serve as features (`X`) and our target variable (`y`).
# Choose the columns to use
columnas_importantes = [
    'gender',
    'Partner',
    'Dependents',
    'SeniorCitizen',
    'PhoneService',
    'MultipleLines',
    'InternetService',
    'MonthlyCharges'
]

# New DataFrame with the selected columns
df_limpio = df[columnas_importantes]

# Look at a sample of the data in our new DataFrame
df_limpio.head()
_____no_output_____
MIT
clase4/4-1_preprocessing.ipynb
jinchuika/umg-ai
As we can see, most of the columns with categorical data are stored as text. If we want to use that data in an artificial intelligence algorithm, we need to convert it into numeric representations. To do so, we will perform a pre-processing step using a `LabelEncoder`.
# Create the encoder for the gender column
gender_encoder = LabelEncoder()

# "Learn" the data for the encoder.
# This process recognizes the different categories in the data.
# It assigns a number to each category, starting from 0.
gender_encoder.fit(df_limpio['gender'])

# We can transform the original data into its numeric representations
gender_encoder.fit_transform(df_limpio['gender'])

# If we wanted to transform the data in the opposite direction, i.e.
# recover the original values from their representations,
# we can use the inverse_transform method
gender_encoder.inverse_transform([0, 1, 1, 1, 0])

# We can look at the classes found by the encoder
gender_encoder.classes_
_____no_output_____
MIT
clase4/4-1_preprocessing.ipynb
jinchuika/umg-ai
A more efficient way

Since we need an encoder for each column, we will create a data structure that can store a separate encoder for each column.
# We start by inspecting the columns of the DataFrame
for columna in df_limpio.columns:
    print(columna)
gender
Partner
Dependents
SeniorCitizen
PhoneService
MultipleLines
InternetService
MonthlyCharges
MIT
clase4/4-1_preprocessing.ipynb
jinchuika/umg-ai
Since not all columns need to be encoded (`SeniorCitizen` is already encoded and `MonthlyCharges` is not a categorical value), we will include only the columns with categorical data that need encoding.
# Create a new DataFrame that will hold the converted data
df_final = pd.DataFrame()

# In a dictionary, store one encoder per column
encoders = {
    'gender': LabelEncoder(),
    'Partner': LabelEncoder(),
    'Dependents': LabelEncoder(),
    'PhoneService': LabelEncoder(),
    'MultipleLines': LabelEncoder(),
    'InternetService': LabelEncoder(),
}

# Encode each column and add it to the new DataFrame
for columna, encoder in encoders.items():
    encoder.fit(df_limpio[columna])
    df_final[columna] = encoder.fit_transform(df_limpio[columna])

df_final.head()

# Review all the classes found by the encoders
for column, encoder in encoders.items():
    print(column, encoder.classes_)
    print('=======')

# Add the columns we did not transform to the final DataFrame
df_final['SeniorCitizen'] = df_limpio['SeniorCitizen']
df_final['MonthlyCharges'] = df_limpio['MonthlyCharges']

df_final.head()

# Export the encoded data to a new CSV file
df_final.to_csv('datos_limpios.csv', index=None)
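Keeping one fitted encoder per column also makes it easy to go back from the numeric codes to the original text labels later, for example when inspecting a model's predictions. A small sketch, assuming the `encoders` dictionary and `df_final` built above:

```python
# Decode the numeric codes back into the original text categories,
# reusing the fitted encoders stored in the dictionary.
decoded = pd.DataFrame({
    column: encoder.inverse_transform(df_final[column])
    for column, encoder in encoders.items()
})
decoded.head()
```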
_____no_output_____
MIT
clase4/4-1_preprocessing.ipynb
jinchuika/umg-ai
*Data Science Unit 4 Sprint 3 Assignment 2*

Convolutional Neural Networks (CNNs) Assignment

- Part 1: Pre-Trained Model
- Part 2: Custom CNN Model
- Part 3: CNN with Data Augmentation

You will apply three different CNN models to a binary image classification problem using Keras. Classify images of mountains (`./data/mountain/*`) and images of forests (`./data/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative class (0).

|Mountain (+)|Forest (-)|
|---|---|
|![](./data/mountain/art1131.jpg)|![](./data/forest/cdmc317.jpg)|

The problem is relatively difficult given that the sample is tiny: there are about 350 observations per class. This sample size might be something that you can expect when prototyping an image classification problem/solution at work. Get accustomed to evaluating several different possible models.

Pre-Trained Model

Load a pretrained network from Keras, [ResNet50](https://tfhub.dev/google/imagenet/resnet_v1_50/classification/1), a 50-layer deep network trained to recognize [1000 objects](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). Starting usage:

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model  # This is the functional API

resnet = ResNet50(weights='imagenet', include_top=False)
```

The `include_top` parameter in `ResNet50` will remove the fully connected layers from the ResNet model. The next step is to turn off the training of the ResNet layers. We want to use the learned parameters without updating them in future training passes.

```python
for layer in resnet.layers:
    layer.trainable = False
```

Using the Keras functional API, we will need to add additional fully connected layers to our model. When we removed the top layers, we removed all previous fully connected layers. In other words, we kept only the feature-processing portions of our network. You can experiment with additional layers beyond what's listed here. The `GlobalAveragePooling2D` layer functions as a really fancy flatten function by taking the average of each of the last convolutional layer outputs (which is still two dimensional).

```python
x = resnet.output
x = GlobalAveragePooling2D()(x)  # This layer is a really fancy flatten
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
model = Model(resnet.input, predictions)
```

Your assignment is to apply the transfer learning above to classify images of mountains (`./data/mountain/*`) and images of forests (`./data/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero).

Steps to complete assignment:
1. Load in image data into numpy arrays (`X`)
2. Create a `y` for the labels
3. Train your model with pretrained layers from ResNet
4. Report your model's accuracy

Load in Data

![skimage-logo](https://scikit-image.org/_static/img/logo.png)

Check out [`skimage`](https://scikit-image.org/) for useful functions related to processing the images. In particular check out the documentation for `skimage.io.imread_collection` and `skimage.transform.resize`.
import numpy as np import os from skimage import color, io from sklearn.model_selection import train_test_split from tensorflow.keras.applications.resnet50 import ResNet50 from tensorflow.keras.preprocessing import image from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions from tensorflow.keras.layers import Dense, GlobalAveragePooling2D from tensorflow.keras.models import Model resnet = ResNet50(weights='imagenet', include_top=False) forests = io.imread_collection('./data/forest/*') mountains = io.imread_collection('./data/mountain/*') len(forests), len(mountains) forest_labels = np.zeros(len(forests)) mountain_labels = np.ones(len(mountains)) labels = np.concatenate((forest_labels, mountain_labels), axis=0) pics = np.concatenate((forests, mountains), axis=0) labels_train = labels[:500] labels_test = labels[500:] pics_train = pics[:500] pics_test = pics[500:] len(labels_train), len(labels_test)
_____no_output_____
MIT
module2-convolutional-neural-networks/LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb
j-m-d-h/DS-Unit-4-Sprint-3-Deep-Learning
Instantiate Model
for layer in resnet.layers: layer.trainable = False x = resnet.output x = GlobalAveragePooling2D()(x) x = Dense(1024, activation='relu')(x) predictions = Dense(1, activation='sigmoid')(x) model = Model(resnet.input, predictions) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
_____no_output_____
MIT
module2-convolutional-neural-networks/LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb
j-m-d-h/DS-Unit-4-Sprint-3-Deep-Learning
Fit Model
model.fit(pics_train, labels_train, validation_data=(pics_test, labels_test), epochs=4) # add a batch dimension when predicting on a single image model.predict(pics[206][np.newaxis, ...])
_____no_output_____
MIT
module2-convolutional-neural-networks/LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb
j-m-d-h/DS-Unit-4-Sprint-3-Deep-Learning
Custom CNN Model In this step, write and train your own convolutional neural network using Keras. You can use any architecture that suits you as long as it has at least one convolutional and one pooling layer at the beginning of the network - you can add more if you want.
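The code cell that produced the training log below is not shown in this extract; a minimal sketch of a custom CNN of the kind the assignment asks for, where the layer sizes and the 224x224x3 input shape are assumptions:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# One conv + pooling block at the front, followed by a small dense head (binary output)
custom_model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid'),
])
custom_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# custom_model.fit(pics_train, labels_train, validation_data=(pics_test, labels_test), epochs=5)
```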
Train on 561 samples, validate on 141 samples Epoch 1/5 561/561 [==============================] - 18s 32ms/sample - loss: 0.2667 - accuracy: 0.9073 - val_loss: 0.1186 - val_accuracy: 0.9858 Epoch 2/5 561/561 [==============================] - 18s 32ms/sample - loss: 0.2046 - accuracy: 0.9073 - val_loss: 0.3342 - val_accuracy: 0.8511 Epoch 3/5 561/561 [==============================] - 18s 32ms/sample - loss: 0.1778 - accuracy: 0.9287 - val_loss: 0.2746 - val_accuracy: 0.8723 Epoch 4/5 561/561 [==============================] - 18s 32ms/sample - loss: 0.1681 - accuracy: 0.9323 - val_loss: 0.8487 - val_accuracy: 0.5957 Epoch 5/5 561/561 [==============================] - 18s 32ms/sample - loss: 0.1606 - accuracy: 0.9394 - val_loss: 0.3903 - val_accuracy: 0.8582
MIT
module2-convolutional-neural-networks/LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb
j-m-d-h/DS-Unit-4-Sprint-3-Deep-Learning
Custom CNN Model with Image Manipulations *This is a stretch goal, and it's relatively difficult* To simulate an increase in the sample of images, you can apply image manipulation techniques: cropping, rotation, stretching, etc. Luckily Keras has some handy functions for us to apply these techniques to our mountain and forest example. Check out these resources to help you get started: 1. [Keras `ImageDataGenerator` Class](https://keras.io/preprocessing/image/imagedatagenerator-class) 2. [Building a powerful image classifier with very little data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html)
# State Code for Image Manipulation Here
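The cell above is only a placeholder; a minimal sketch of how Keras's `ImageDataGenerator` could supply augmented batches for this task, where the specific augmentation parameters and batch size are assumptions:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random rotations, shifts, and horizontal flips to enlarge the effective sample
datagen = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)
# train_generator = datagen.flow(pics_train, labels_train, batch_size=32)
# model.fit(train_generator, epochs=5)
```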
_____no_output_____
MIT
module2-convolutional-neural-networks/LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb
j-m-d-h/DS-Unit-4-Sprint-3-Deep-Learning
Set Up
# imports and jupyter environment settings %run -i settings.py # setup environment variables %run -i setup.py # To auto-reload modules in jupyter notebook (so that changes in *.py files don't require manual reloading): # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %reload_ext autoreload %autoreload 2 # you can now enable 2x images by just adding the line: # see: https://gist.github.com/minrk/3301035 %config InlineBackend.figure_format = 'png' file = f'{BASE_DIR}/data/circuit_court_2009_2019.csv.gz' df = pd.read_csv(file, parse_dates=['offense_date']) df.head(5)
_____no_output_____
MIT
va_circuit_court_eda.ipynb
mh4ey/va-case-expungement
Summary Statistics
# The data frame is described with ['object', 'float', 'int'] passed to the include parameter # and [.20, .40, .60, .80] passed to the percentiles parameter to view those percentiles of the numeric series. # see: https://www.geeksforgeeks.org/python-pandas-dataframe-describe-method/ perc = [0.20, .40, .60, 0.80] include = ['object', 'float', 'int'] df.describe(percentiles=perc, include=include).T
_____no_output_____
MIT
va_circuit_court_eda.ipynb
mh4ey/va-case-expungement
Data Cleaning
# number of na values in each column df.isna().sum()
_____no_output_____
MIT
va_circuit_court_eda.ipynb
mh4ey/va-case-expungement
For our study, 'personId', 'offense_date', 'final_disposition', 'fips', 'race', 'gender', 'charge', 'fips_area' are mandatory columns, so all records where any of these field values are missing will be removed.
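The cell below drops rows based on `final_disposition` and `race` only; a minimal sketch of how the same rule could cover the full mandatory set in one call (the column list mirrors the sentence above and is an assumption about the actual schema; 'fips_area' is left out because it is only added by the merge later in this notebook):

```python
mandatory = ['person_id', 'offense_date', 'final_disposition', 'fips',
             'race', 'gender', 'charge']
df = df.dropna(axis=0, subset=mandatory)
```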
df.dropna(axis=0, subset=['final_disposition'], inplace=True) df.dropna(axis=0, subset=['race'], inplace=True) df.isna().sum() # for the moment we consider ammended_charge an important field, so we impute its missing values with 'unknown' df['ammended_charge'].fillna('unknown', inplace=True) df.isna().sum() duplicate_records = pd.DataFrame(df.duplicated(), columns=['isduplicate']) duplicate_records = duplicate_records.reset_index() duplicate_records.columns = [str(column) for column in duplicate_records.columns] duplicate_records = duplicate_records.set_index('index') duplicate_records.groupby('isduplicate').count()
_____no_output_____
MIT
va_circuit_court_eda.ipynb
mh4ey/va-case-expungement
Our dataset has 974935 duplicate records, so we purge these records from our dataset
#drop duplicate records df = df.drop_duplicates(['person_id', 'offense_date', 'final_disposition', 'fips', 'race', 'gender', 'charge', 'ammended_charge']) len(df)
_____no_output_____
MIT
va_circuit_court_eda.ipynb
mh4ey/va-case-expungement
* We can see that hearing_date, fips, gender, charge_type and person_id do not have any missing data; however, we need to address the missing data in the other columns.
df.isna().sum()
_____no_output_____
MIT
va_circuit_court_eda.ipynb
mh4ey/va-case-expungement
Per our domain expert, if ammended_charge is reduced to **'DWI'**, then the case is a candidate for expungement.
dwi_ammended_charge_df = df [df.ammended_charge.str.contains('DWI') == True] len(dwi_ammended_charge_df)
_____no_output_____
MIT
va_circuit_court_eda.ipynb
mh4ey/va-case-expungement
Our aim is to derive the candidate cases for expungement based on the hearing results, so let us examine the unique values of the hearing result.
# top 15 hearing results hearing_result_counts = df['final_disposition'].value_counts() subset = hearing_result_counts[:15] sns.barplot(y=subset.index, x=subset.values) # top 15 hearing results by gender df_result_bygendger = df.groupby(['final_disposition', 'gender'])\ .size()\ .unstack()\ .fillna(0)\ .sort_values(['Female', 'Male'], ascending=False) df_stacked = df_result_bygendger.head(15).stack() df_stacked.name = 'total' df_stacked = df_stacked.reset_index() sns.barplot(x='total', y='final_disposition', hue='gender', data=df_stacked)
_____no_output_____
MIT
va_circuit_court_eda.ipynb
mh4ey/va-case-expungement
Preprocessing Data Adding fips_area name column
#load fips code table fips_file = 'reference-data/va-fips-codes.csv' fips_df = pd.read_csv(fips_file) fips_df = fips_df[['CountyFIPSCode', 'GUName']] fips_df #add fips_GUName df = pd.merge(df,fips_df,left_on='fips', right_on='CountyFIPSCode', how='left')\ .drop(columns=['CountyFIPSCode'], axis=1)\ .rename(columns={'GUName': 'fips_area'}) df
_____no_output_____
MIT
va_circuit_court_eda.ipynb
mh4ey/va-case-expungement
Identifying the candidates for case expungement Assumptions made based on domain expert input: 1. If the final_disposition is any of the following values 'Dismissed', 'Noile Prosequi', 'Not Guilty', 'Withdrawn', 'Not Found', 'No Indictment Presented', 'No Longer Under Advisement', 'Not True Bill', then the case is a candidate for expungement. 2. **TODO:** If the charges are amended to DWI, the case can also be a candidate for expungement, provided the person does not have any prior felony charges (see the sketch after the next cell).
cand_list =['Dismissed','Noile Prosequi','Not Guilty', 'Withdrawn', 'Not Found', 'No Indictment Presented', 'No Longer Under Advisement', 'Not True Bill'] df['candidate'] = [1 if x in cand_list else 0 for x in df['final_disposition']] df df.groupby(['candidate']).count().head() df.groupby(['candidate','race','gender']).count() df.to_csv( PROCESSED_PATH + "district_court_2009_2019_cleansed.csv.gz", index=False, compression="gzip", header=True, quotechar='"', doublequote=True, line_terminator="\n", ) delete_file(PROCESSED_PATH + "district_court_2009_2019.csv.gz") #!jupyter nbconvert va_circuit_court_eda.ipynb --to pdf
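The cell above implements assumption 1 only; a minimal sketch of how assumption 2 might be flagged, reusing the DWI filter idea from earlier in the notebook. The prior-felony condition is deliberately omitted here because no felony indicator column appears in this extract, so treat this purely as an illustration:

```python
# Flag cases whose charge was amended to DWI as additional expungement candidates.
# NOTE: assumption 2 also requires "no prior felony charges"; that check is omitted
# because no felony indicator column is shown in this dataset extract.
dwi_mask = df['ammended_charge'].str.contains('DWI', na=False)
df.loc[dwi_mask, 'candidate'] = 1
```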
_____no_output_____
MIT
va_circuit_court_eda.ipynb
mh4ey/va-case-expungement
Implementing ResNet in PyTorch Today we are going to implement the famous ResNet from Kaiming He et al. (Microsoft Research). It won 1st place on the ILSVRC 2015 classification task. The original paper can be read [here](https://arxiv.org/abs/1512.03385) and is very easy to follow; additional material can be found in this [quora answer](https://www.quora.com/) ![alt](./images/custom/rotated-resnet34.png) *Deeper neural networks are more difficult to train.* Why? One big problem of deeper networks is the vanishing gradient: basically, the model is not able to learn anymore. To solve this problem, the Authors proposed to use a reference to the previous layer to compute the output at a given layer. In ResNet, the output from the previous layer, called the **residual**, is added to the output of the current layer. The following picture visualizes this operation ![alt](./images/residual.png) We are going to make our implementation **as scalable as possible** using one thing unknown to most data scientists: **object-oriented programming** Basic Block Okay, the first thing is to think about what we need. First of all we need a convolution layer, and since PyTorch does not have 'auto' padding in Conv2d, we have to code it ourselves!
import torch from torch import nn from functools import partial class Conv2dAuto(nn.Conv2d): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.padding = (self.kernel_size[0] // 2, self.kernel_size[1] // 2) # dynamically add padding based on the kernel_size conv3x3 = partial(Conv2dAuto, kernel_size=3, bias=False) conv = conv3x3(in_channels=32, out_channels=64) print(conv) del conv
Conv2dAuto(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
MIT
ResNet.ipynb
harshithbelagur/ResNet
Residual Block To write clean code it is mandatory to think about the main building block of each application, or of the network in our case. The residual block takes an input with `in_channels`, applies some blocks of convolutional layers to reduce it to `out_channels`, and sums the result with the original input. If their sizes mismatch, the input is first passed through the `shortcut`. We can abstract this process and create an interface that can be extended.
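Written out, the residual block below computes

$$y = \mathcal{F}(x) + x$$

where $\mathcal{F}(x)$ is whatever `self.blocks` applies and $x$ is the input (routed through `self.shortcut` when the channel counts differ); the `forward` method that follows is a direct translation of this identity.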
class ResidualBlock(nn.Module): def __init__(self, in_channels, out_channels): super().__init__() self.in_channels, self.out_channels = in_channels, out_channels self.blocks = nn.Identity() self.shortcut = nn.Identity() def forward(self, x): residual = x if self.should_apply_shortcut: residual = self.shortcut(x) x = self.blocks(x) x += residual return x @property def should_apply_shortcut(self): return self.in_channels != self.out_channels ResidualBlock(32, 64)
_____no_output_____
MIT
ResNet.ipynb
harshithbelagur/ResNet
Let's test it with a dummy vector of ones; since the block adds the input back as the residual, we should get a vector of twos.
dummy = torch.ones((1, 1, 1, 1)) block = ResidualBlock(1, 64) block(dummy)
_____no_output_____
MIT
ResNet.ipynb
harshithbelagur/ResNet
In ResNet, each block has an expansion parameter in order to increase the `out_channels`. Also, the identity is defined as a Convolution followed by a BatchNorm layer; this is referred to as the `shortcut`. Then, we can just extend `ResidualBlock` and define the `shortcut` function.
from collections import OrderedDict class ResNetResidualBlock(ResidualBlock): def __init__(self, in_channels, out_channels, expansion=1, downsampling=1, conv=conv3x3, *args, **kwargs): super().__init__(in_channels, out_channels) self.expansion, self.downsampling, self.conv = expansion, downsampling, conv self.shortcut = nn.Sequential(OrderedDict( { 'conv' : nn.Conv2d(self.in_channels, self.expanded_channels, kernel_size=1, stride=self.downsampling, bias=False), 'bn' : nn.BatchNorm2d(self.expanded_channels) })) if self.should_apply_shortcut else None @property def expanded_channels(self): return self.out_channels * self.expansion @property def should_apply_shortcut(self): return self.in_channels != self.expanded_channels ResNetResidualBlock(32, 64)
_____no_output_____
MIT
ResNet.ipynb
harshithbelagur/ResNet
Basic Block A basic ResNet block is composed of two layers of `3x3` conv/batchnorm/relu. In the picture, the lines represent the residual operation. The dotted line means that the shortcut was applied to match the input and the output dimensions. ![alt](./images/custom/Block.png) Let's first create a handy function to stack one conv and one batchnorm layer, using `OrderedDict` to properly name each sublayer.
from collections import OrderedDict def conv_bn(in_channels, out_channels, conv, *args, **kwargs): return nn.Sequential(OrderedDict({'conv': conv(in_channels, out_channels, *args, **kwargs), 'bn': nn.BatchNorm2d(out_channels) })) conv_bn(3, 3, nn.Conv2d, kernel_size=3) class ResNetBasicBlock(ResNetResidualBlock): expansion = 1 def __init__(self, in_channels, out_channels, activation=nn.ReLU, *args, **kwargs): super().__init__(in_channels, out_channels, *args, **kwargs) self.blocks = nn.Sequential( conv_bn(self.in_channels, self.out_channels, conv=self.conv, bias=False, stride=self.downsampling), activation(), conv_bn(self.out_channels, self.expanded_channels, conv=self.conv, bias=False), ) dummy = torch.ones((1, 32, 224, 224)) block = ResNetBasicBlock(32, 64) block(dummy).shape print(block) '''The shortcut is the residual (the solution for vanishing gradient, skipping the 2 layers and directly contributing to the output)'''
ResNetBasicBlock( (blocks): Sequential( (0): Sequential( (conv): Conv2dAuto(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (1): ReLU() (2): Sequential( (conv): Conv2dAuto(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (shortcut): Sequential( (conv): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) )
MIT
ResNet.ipynb
harshithbelagur/ResNet
BottleNeck To increase the network depth while decreasing the number of parameters, the Authors defined a BottleNeck block where "The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions." We can extend `ResNetResidualBlock` and create these blocks.
class ResNetBottleNeckBlock(ResNetResidualBlock): expansion = 4 def __init__(self, in_channels, out_channels, activation=nn.ReLU, *args, **kwargs): super().__init__(in_channels, out_channels, expansion=4, *args, **kwargs) self.blocks = nn.Sequential( conv_bn(self.in_channels, self.out_channels, self.conv, kernel_size=1), activation(), conv_bn(self.out_channels, self.out_channels, self.conv, kernel_size=3, stride=self.downsampling), activation(), conv_bn(self.out_channels, self.expanded_channels, self.conv, kernel_size=1), ) dummy = torch.ones((1, 32, 10, 10)) block = ResNetBottleNeckBlock(32, 64) block(dummy).shape print(block)
ResNetBottleNeckBlock( (blocks): Sequential( (0): Sequential( (conv): Conv2dAuto(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (1): ReLU() (2): Sequential( (conv): Conv2dAuto(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (3): ReLU() (4): Sequential( (conv): Conv2dAuto(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (shortcut): Sequential( (conv): Conv2d(32, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) )
MIT
ResNet.ipynb
harshithbelagur/ResNet
Layer A ResNet layer is composed of blocks stacked one after the other. ![alt](./images/custom/Layer.png) We can easily define it by just stacking `n` blocks one after the other; just remember that the first convolution block has a stride of two, since "We perform downsampling directly by convolutional layers that have a stride of 2".
class ResNetLayer(nn.Module): def __init__(self, in_channels, out_channels, block=ResNetBasicBlock, n=1, *args, **kwargs): super().__init__() # 'We perform downsampling directly by convolutional layers that have a stride of 2.' downsampling = 2 if in_channels != out_channels else 1 self.blocks = nn.Sequential( block(in_channels , out_channels, *args, **kwargs, downsampling=downsampling), *[block(out_channels * block.expansion, out_channels, downsampling=1, *args, **kwargs) for _ in range(n - 1)] ) def forward(self, x): x = self.blocks(x) return x dummy = torch.ones((1, 32, 48, 48)) layer = ResNetLayer(64, 128, block=ResNetBasicBlock, n=3) # layer(dummy).shape layer
_____no_output_____
MIT
ResNet.ipynb
harshithbelagur/ResNet
Encoder Similarly, the encoder is composed of multiple layers with increasing feature sizes. ![alt](./images/custom/rotated-Encoder.png)
class ResNetEncoder(nn.Module): """ ResNet encoder composed by increasing different layers with increasing features. """ def __init__(self, in_channels=3, blocks_sizes=[64, 128, 256, 512], deepths=[2,2,2,2], activation=nn.ReLU, block=ResNetBasicBlock, *args,**kwargs): super().__init__() self.blocks_sizes = blocks_sizes self.gate = nn.Sequential( nn.Conv2d(in_channels, self.blocks_sizes[0], kernel_size=7, stride=2, padding=3, bias=False), nn.BatchNorm2d(self.blocks_sizes[0]), activation(), nn.MaxPool2d(kernel_size=3, stride=2, padding=1) ) self.in_out_block_sizes = list(zip(blocks_sizes, blocks_sizes[1:])) self.blocks = nn.ModuleList([ ResNetLayer(blocks_sizes[0], blocks_sizes[0], n=deepths[0], activation=activation, block=block, *args, **kwargs), *[ResNetLayer(in_channels * block.expansion, out_channels, n=n, activation=activation, block=block, *args, **kwargs) for (in_channels, out_channels), n in zip(self.in_out_block_sizes, deepths[1:])] ]) def forward(self, x): x = self.gate(x) for block in self.blocks: x = block(x) return x
_____no_output_____
MIT
ResNet.ipynb
harshithbelagur/ResNet
Decoder The decoder is the last piece we need to create the full network. It is a fully connected layer that maps the features learned by the network to their respective classes. We can easily define it as:
class ResnetDecoder(nn.Module): """ This class represents the tail of ResNet. It performs a global pooling and maps the output to the correct class by using a fully connected layer. """ def __init__(self, in_features, n_classes): super().__init__() self.avg = nn.AdaptiveAvgPool2d((1, 1)) self.decoder = nn.Linear(in_features, n_classes) def forward(self, x): x = self.avg(x) x = x.view(x.size(0), -1) x = self.decoder(x) return x
_____no_output_____
MIT
ResNet.ipynb
harshithbelagur/ResNet
ResNet Finally, we can put all the pieces together and create the final model. ![alt](./images/custom/rotated-resnet34.png)
class ResNet(nn.Module): def __init__(self, in_channels, n_classes, *args, **kwargs): super().__init__() self.encoder = ResNetEncoder(in_channels, *args, **kwargs) self.decoder = ResnetDecoder(self.encoder.blocks[-1].blocks[-1].expanded_channels, n_classes) def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x
_____no_output_____
MIT
ResNet.ipynb
harshithbelagur/ResNet
We can now define the five models proposed by the Authors: `resnet18, 34, 50, 101, 152`.
def resnet18(in_channels, n_classes): return ResNet(in_channels, n_classes, block=ResNetBasicBlock, deepths=[2, 2, 2, 2]) def resnet34(in_channels, n_classes): return ResNet(in_channels, n_classes, block=ResNetBasicBlock, deepths=[3, 4, 6, 3]) def resnet50(in_channels, n_classes): return ResNet(in_channels, n_classes, block=ResNetBottleNeckBlock, deepths=[3, 4, 6, 3]) def resnet101(in_channels, n_classes): return ResNet(in_channels, n_classes, block=ResNetBottleNeckBlock, deepths=[3, 4, 23, 3]) def resnet152(in_channels, n_classes): return ResNet(in_channels, n_classes, block=ResNetBottleNeckBlock, deepths=[3, 8, 36, 3]) from torchsummary import summary model = resnet101(3, 1000) summary(model.cuda(), (3, 224, 224)) import torchvision.models as models # resnet101(False) summary(models.resnet101(False).cuda(), (3, 224, 224))
---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 112, 112] 9,408 BatchNorm2d-2 [-1, 64, 112, 112] 128 ReLU-3 [-1, 64, 112, 112] 0 MaxPool2d-4 [-1, 64, 56, 56] 0 Conv2d-5 [-1, 64, 56, 56] 4,096 BatchNorm2d-6 [-1, 64, 56, 56] 128 ReLU-7 [-1, 64, 56, 56] 0 Conv2d-8 [-1, 64, 56, 56] 36,864 BatchNorm2d-9 [-1, 64, 56, 56] 128 ReLU-10 [-1, 64, 56, 56] 0 Conv2d-11 [-1, 256, 56, 56] 16,384 BatchNorm2d-12 [-1, 256, 56, 56] 512 Conv2d-13 [-1, 256, 56, 56] 16,384 BatchNorm2d-14 [-1, 256, 56, 56] 512 ReLU-15 [-1, 256, 56, 56] 0 Bottleneck-16 [-1, 256, 56, 56] 0 Conv2d-17 [-1, 64, 56, 56] 16,384 BatchNorm2d-18 [-1, 64, 56, 56] 128 ReLU-19 [-1, 64, 56, 56] 0 Conv2d-20 [-1, 64, 56, 56] 36,864 BatchNorm2d-21 [-1, 64, 56, 56] 128 ReLU-22 [-1, 64, 56, 56] 0 Conv2d-23 [-1, 256, 56, 56] 16,384 BatchNorm2d-24 [-1, 256, 56, 56] 512 ReLU-25 [-1, 256, 56, 56] 0 Bottleneck-26 [-1, 256, 56, 56] 0 Conv2d-27 [-1, 64, 56, 56] 16,384 BatchNorm2d-28 [-1, 64, 56, 56] 128 ReLU-29 [-1, 64, 56, 56] 0 Conv2d-30 [-1, 64, 56, 56] 36,864 BatchNorm2d-31 [-1, 64, 56, 56] 128 ReLU-32 [-1, 64, 56, 56] 0 Conv2d-33 [-1, 256, 56, 56] 16,384 BatchNorm2d-34 [-1, 256, 56, 56] 512 ReLU-35 [-1, 256, 56, 56] 0 Bottleneck-36 [-1, 256, 56, 56] 0 Conv2d-37 [-1, 128, 56, 56] 32,768 BatchNorm2d-38 [-1, 128, 56, 56] 256 ReLU-39 [-1, 128, 56, 56] 0 Conv2d-40 [-1, 128, 28, 28] 147,456 BatchNorm2d-41 [-1, 128, 28, 28] 256 ReLU-42 [-1, 128, 28, 28] 0 Conv2d-43 [-1, 512, 28, 28] 65,536 BatchNorm2d-44 [-1, 512, 28, 28] 1,024 Conv2d-45 [-1, 512, 28, 28] 131,072 BatchNorm2d-46 [-1, 512, 28, 28] 1,024 ReLU-47 [-1, 512, 28, 28] 0 Bottleneck-48 [-1, 512, 28, 28] 0 Conv2d-49 [-1, 128, 28, 28] 65,536 BatchNorm2d-50 [-1, 128, 28, 28] 256 ReLU-51 [-1, 128, 28, 28] 0 Conv2d-52 [-1, 128, 28, 28] 147,456 BatchNorm2d-53 [-1, 128, 28, 28] 256 ReLU-54 [-1, 128, 28, 28] 0 Conv2d-55 [-1, 512, 28, 28] 65,536 BatchNorm2d-56 [-1, 512, 28, 28] 1,024 ReLU-57 [-1, 512, 28, 28] 0 Bottleneck-58 [-1, 512, 28, 28] 0 Conv2d-59 [-1, 128, 28, 28] 65,536 BatchNorm2d-60 [-1, 128, 28, 28] 256 ReLU-61 [-1, 128, 28, 28] 0 Conv2d-62 [-1, 128, 28, 28] 147,456 BatchNorm2d-63 [-1, 128, 28, 28] 256 ReLU-64 [-1, 128, 28, 28] 0 Conv2d-65 [-1, 512, 28, 28] 65,536 BatchNorm2d-66 [-1, 512, 28, 28] 1,024 ReLU-67 [-1, 512, 28, 28] 0 Bottleneck-68 [-1, 512, 28, 28] 0 Conv2d-69 [-1, 128, 28, 28] 65,536 BatchNorm2d-70 [-1, 128, 28, 28] 256 ReLU-71 [-1, 128, 28, 28] 0 Conv2d-72 [-1, 128, 28, 28] 147,456 BatchNorm2d-73 [-1, 128, 28, 28] 256 ReLU-74 [-1, 128, 28, 28] 0 Conv2d-75 [-1, 512, 28, 28] 65,536 BatchNorm2d-76 [-1, 512, 28, 28] 1,024 ReLU-77 [-1, 512, 28, 28] 0 Bottleneck-78 [-1, 512, 28, 28] 0 Conv2d-79 [-1, 256, 28, 28] 131,072 BatchNorm2d-80 [-1, 256, 28, 28] 512 ReLU-81 [-1, 256, 28, 28] 0 Conv2d-82 [-1, 256, 14, 14] 589,824 BatchNorm2d-83 [-1, 256, 14, 14] 512 ReLU-84 [-1, 256, 14, 14] 0 Conv2d-85 [-1, 1024, 14, 14] 262,144 BatchNorm2d-86 [-1, 1024, 14, 14] 2,048 Conv2d-87 [-1, 1024, 14, 14] 524,288 BatchNorm2d-88 [-1, 1024, 14, 14] 2,048 ReLU-89 [-1, 1024, 14, 14] 0 Bottleneck-90 [-1, 1024, 14, 14] 0 Conv2d-91 [-1, 256, 14, 14] 262,144 BatchNorm2d-92 [-1, 256, 14, 14] 512 ReLU-93 [-1, 256, 14, 14] 0 Conv2d-94 [-1, 256, 14, 14] 589,824 BatchNorm2d-95 [-1, 256, 14, 14] 512 ReLU-96 [-1, 256, 14, 14] 0 Conv2d-97 [-1, 1024, 14, 14] 262,144 BatchNorm2d-98 [-1, 1024, 14, 14] 2,048 ReLU-99 [-1, 1024, 14, 14] 0 Bottleneck-100 [-1, 1024, 14, 14] 0 Conv2d-101 [-1, 256, 
14, 14] 262,144 BatchNorm2d-102 [-1, 256, 14, 14] 512 ReLU-103 [-1, 256, 14, 14] 0 Conv2d-104 [-1, 256, 14, 14] 589,824 BatchNorm2d-105 [-1, 256, 14, 14] 512 ReLU-106 [-1, 256, 14, 14] 0 Conv2d-107 [-1, 1024, 14, 14] 262,144 BatchNorm2d-108 [-1, 1024, 14, 14] 2,048 ReLU-109 [-1, 1024, 14, 14] 0 Bottleneck-110 [-1, 1024, 14, 14] 0 Conv2d-111 [-1, 256, 14, 14] 262,144 BatchNorm2d-112 [-1, 256, 14, 14] 512 ReLU-113 [-1, 256, 14, 14] 0 Conv2d-114 [-1, 256, 14, 14] 589,824 BatchNorm2d-115 [-1, 256, 14, 14] 512 ReLU-116 [-1, 256, 14, 14] 0 Conv2d-117 [-1, 1024, 14, 14] 262,144 BatchNorm2d-118 [-1, 1024, 14, 14] 2,048 ReLU-119 [-1, 1024, 14, 14] 0 Bottleneck-120 [-1, 1024, 14, 14] 0 Conv2d-121 [-1, 256, 14, 14] 262,144 BatchNorm2d-122 [-1, 256, 14, 14] 512 ReLU-123 [-1, 256, 14, 14] 0 Conv2d-124 [-1, 256, 14, 14] 589,824 BatchNorm2d-125 [-1, 256, 14, 14] 512 ReLU-126 [-1, 256, 14, 14] 0 Conv2d-127 [-1, 1024, 14, 14] 262,144 BatchNorm2d-128 [-1, 1024, 14, 14] 2,048 ReLU-129 [-1, 1024, 14, 14] 0 Bottleneck-130 [-1, 1024, 14, 14] 0 Conv2d-131 [-1, 256, 14, 14] 262,144 BatchNorm2d-132 [-1, 256, 14, 14] 512 ReLU-133 [-1, 256, 14, 14] 0 Conv2d-134 [-1, 256, 14, 14] 589,824 BatchNorm2d-135 [-1, 256, 14, 14] 512 ReLU-136 [-1, 256, 14, 14] 0 Conv2d-137 [-1, 1024, 14, 14] 262,144 BatchNorm2d-138 [-1, 1024, 14, 14] 2,048 ReLU-139 [-1, 1024, 14, 14] 0 Bottleneck-140 [-1, 1024, 14, 14] 0 Conv2d-141 [-1, 256, 14, 14] 262,144 BatchNorm2d-142 [-1, 256, 14, 14] 512 ReLU-143 [-1, 256, 14, 14] 0 Conv2d-144 [-1, 256, 14, 14] 589,824 BatchNorm2d-145 [-1, 256, 14, 14] 512 ReLU-146 [-1, 256, 14, 14] 0 Conv2d-147 [-1, 1024, 14, 14] 262,144 BatchNorm2d-148 [-1, 1024, 14, 14] 2,048 ReLU-149 [-1, 1024, 14, 14] 0 Bottleneck-150 [-1, 1024, 14, 14] 0 Conv2d-151 [-1, 256, 14, 14] 262,144 BatchNorm2d-152 [-1, 256, 14, 14] 512 ReLU-153 [-1, 256, 14, 14] 0 Conv2d-154 [-1, 256, 14, 14] 589,824 BatchNorm2d-155 [-1, 256, 14, 14] 512 ReLU-156 [-1, 256, 14, 14] 0 Conv2d-157 [-1, 1024, 14, 14] 262,144 BatchNorm2d-158 [-1, 1024, 14, 14] 2,048 ReLU-159 [-1, 1024, 14, 14] 0 Bottleneck-160 [-1, 1024, 14, 14] 0 Conv2d-161 [-1, 256, 14, 14] 262,144 BatchNorm2d-162 [-1, 256, 14, 14] 512 ReLU-163 [-1, 256, 14, 14] 0 Conv2d-164 [-1, 256, 14, 14] 589,824 BatchNorm2d-165 [-1, 256, 14, 14] 512 ReLU-166 [-1, 256, 14, 14] 0 Conv2d-167 [-1, 1024, 14, 14] 262,144 BatchNorm2d-168 [-1, 1024, 14, 14] 2,048 ReLU-169 [-1, 1024, 14, 14] 0 Bottleneck-170 [-1, 1024, 14, 14] 0 Conv2d-171 [-1, 256, 14, 14] 262,144 BatchNorm2d-172 [-1, 256, 14, 14] 512 ReLU-173 [-1, 256, 14, 14] 0 Conv2d-174 [-1, 256, 14, 14] 589,824 BatchNorm2d-175 [-1, 256, 14, 14] 512 ReLU-176 [-1, 256, 14, 14] 0 Conv2d-177 [-1, 1024, 14, 14] 262,144 BatchNorm2d-178 [-1, 1024, 14, 14] 2,048 ReLU-179 [-1, 1024, 14, 14] 0 Bottleneck-180 [-1, 1024, 14, 14] 0 Conv2d-181 [-1, 256, 14, 14] 262,144 BatchNorm2d-182 [-1, 256, 14, 14] 512 ReLU-183 [-1, 256, 14, 14] 0 Conv2d-184 [-1, 256, 14, 14] 589,824 BatchNorm2d-185 [-1, 256, 14, 14] 512 ReLU-186 [-1, 256, 14, 14] 0 Conv2d-187 [-1, 1024, 14, 14] 262,144 BatchNorm2d-188 [-1, 1024, 14, 14] 2,048 ReLU-189 [-1, 1024, 14, 14] 0 Bottleneck-190 [-1, 1024, 14, 14] 0 Conv2d-191 [-1, 256, 14, 14] 262,144 BatchNorm2d-192 [-1, 256, 14, 14] 512 ReLU-193 [-1, 256, 14, 14] 0 Conv2d-194 [-1, 256, 14, 14] 589,824 BatchNorm2d-195 [-1, 256, 14, 14] 512 ReLU-196 [-1, 256, 14, 14] 0 Conv2d-197 [-1, 1024, 14, 14] 262,144 BatchNorm2d-198 [-1, 1024, 14, 14] 2,048 ReLU-199 [-1, 1024, 14, 14] 0 Bottleneck-200 [-1, 1024, 14, 14] 0 Conv2d-201 [-1, 256, 14, 14] 262,144 
BatchNorm2d-202 [-1, 256, 14, 14] 512 ReLU-203 [-1, 256, 14, 14] 0 Conv2d-204 [-1, 256, 14, 14] 589,824 BatchNorm2d-205 [-1, 256, 14, 14] 512 ReLU-206 [-1, 256, 14, 14] 0 Conv2d-207 [-1, 1024, 14, 14] 262,144 BatchNorm2d-208 [-1, 1024, 14, 14] 2,048 ReLU-209 [-1, 1024, 14, 14] 0 Bottleneck-210 [-1, 1024, 14, 14] 0 Conv2d-211 [-1, 256, 14, 14] 262,144 BatchNorm2d-212 [-1, 256, 14, 14] 512 ReLU-213 [-1, 256, 14, 14] 0 Conv2d-214 [-1, 256, 14, 14] 589,824 BatchNorm2d-215 [-1, 256, 14, 14] 512 ReLU-216 [-1, 256, 14, 14] 0 Conv2d-217 [-1, 1024, 14, 14] 262,144 BatchNorm2d-218 [-1, 1024, 14, 14] 2,048 ReLU-219 [-1, 1024, 14, 14] 0 Bottleneck-220 [-1, 1024, 14, 14] 0 Conv2d-221 [-1, 256, 14, 14] 262,144 BatchNorm2d-222 [-1, 256, 14, 14] 512 ReLU-223 [-1, 256, 14, 14] 0 Conv2d-224 [-1, 256, 14, 14] 589,824 BatchNorm2d-225 [-1, 256, 14, 14] 512 ReLU-226 [-1, 256, 14, 14] 0 Conv2d-227 [-1, 1024, 14, 14] 262,144 BatchNorm2d-228 [-1, 1024, 14, 14] 2,048 ReLU-229 [-1, 1024, 14, 14] 0 Bottleneck-230 [-1, 1024, 14, 14] 0 Conv2d-231 [-1, 256, 14, 14] 262,144 BatchNorm2d-232 [-1, 256, 14, 14] 512 ReLU-233 [-1, 256, 14, 14] 0 Conv2d-234 [-1, 256, 14, 14] 589,824 BatchNorm2d-235 [-1, 256, 14, 14] 512 ReLU-236 [-1, 256, 14, 14] 0 Conv2d-237 [-1, 1024, 14, 14] 262,144 BatchNorm2d-238 [-1, 1024, 14, 14] 2,048 ReLU-239 [-1, 1024, 14, 14] 0 Bottleneck-240 [-1, 1024, 14, 14] 0 Conv2d-241 [-1, 256, 14, 14] 262,144 BatchNorm2d-242 [-1, 256, 14, 14] 512 ReLU-243 [-1, 256, 14, 14] 0 Conv2d-244 [-1, 256, 14, 14] 589,824 BatchNorm2d-245 [-1, 256, 14, 14] 512 ReLU-246 [-1, 256, 14, 14] 0 Conv2d-247 [-1, 1024, 14, 14] 262,144 BatchNorm2d-248 [-1, 1024, 14, 14] 2,048 ReLU-249 [-1, 1024, 14, 14] 0 Bottleneck-250 [-1, 1024, 14, 14] 0 Conv2d-251 [-1, 256, 14, 14] 262,144 BatchNorm2d-252 [-1, 256, 14, 14] 512
MIT
ResNet.ipynb
harshithbelagur/ResNet
Our modeling approach - We fully characterize the existence of a 1-round protocol by a Mixed Integer Linear Program.
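The cell below explores this with ad-hoc SAT and linear-algebra machinery; as a point of comparison, here is a minimal sketch of how a feasibility question of this general shape could be posed directly with Sage's `MixedIntegerLinearProgram`. The variables and constraints are purely illustrative stand-ins, not the protocol's actual constraint system:

```python
# Illustrative only: a tiny feasibility MILP in Sage.
# x[(group, j)] are nonnegative "probability mass" variables that must sum to 1
# per group, standing in for the real protocol constraints built later on.
milp = MixedIntegerLinearProgram(maximization=False)
x = milp.new_variable(nonnegative=True)
for group in range(3):
    milp.add_constraint(sum(x[(group, j)] for j in range(4)) == 1)
milp.set_objective(None)       # pure feasibility check, no objective
milp.solve()                   # raises MIPSolverException if infeasible
solution = milp.get_values(x)  # dict mapping variable keys to values
```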
from sage.all import * import itertools import random from sage.sat.solvers.satsolver import SAT from sage.sat.solvers.cryptominisat import CryptoMiniSat from sage.misc.temporary_file import atomic_write import copy import time solver = SAT(solver="LP") solver.add_clause((-1,2)) solver.add_clause((1,3)) solution = solver() print ' solution =',solution lst = ['play', 'ground'] print 'ground' in lst m = matrix(QQ, [[1,2],[4,2],[11,3]]) print 'm = ',m x = [] for v in m: x.append(v) x = matrix(QQ, x) print 'x = ',x pr = Permutations(20).random_element() print pr, pr[4], pr[7] v = vector(QQ, [2,3,4]) print 'len = ',len(v) min(5, 44) l = [1,2,3,4,5] print l [2:4] # TODO: check correctness etc def get_at(tup, a, l): """Given 2*l elements flattened into a tuple, pick 1 out of each pair according to array of bits.""" assert len(tup) == 2*l assert len(a) == l for bit in a: assert bit in (0, 1) # tup structure is: $(t_{0,0}, t_{0,1}, t_{1,0}, t_{1,1}, t_{2,0}, t_{2,1})$ return tuple(tup[2*i + a[i]] for i in range(l)) assert get_at((3, 13, 85, 95), (0, 1), 2) == (3, 95) def tuples_fixed_proj(l, proj_ind, proj_val): # h = randint(1,4000) # if (h == 5): # print '==================== DEBUGGING ========== a,g = ', proj_ind, proj_val tuples = list() x = [0 for i in range(2*l)] for i in range(l): x[i*2 + proj_ind[i]] = proj_val[i] for g in itertools.product(range(2), repeat = l): for i in range(l): x[i*2 + 1 - proj_ind[i]] = g[i] # if (h == 5): # print 'modified x to ', x tuples.append(tuple(x[:])) return tuples # Hack from correctness and server security (!) # strength = 0 is consistent with standard LP. strength = 1 attempts to better sort things out def add_no_id_a(solver, p0, q1, strength = 0): if strength == -1: return for i in range(3): for a in itertools.product(range(2), repeat = l): for j in range(3): if (j > i): # this is relaxed if strength == 0: solver.add_constraint(sum(q1[(i, a, zeros, b)] for b in range(2)) + sum(q1[(j, a, zeros, b)] for b in range(2)) <= 1) elif strength == 1: for b1 in range(2): for b2 in range(2): solver.add_constraint(p0[(i, a, zeros, b1)] + p0[(i, a, zeros, b2)] <= 1) # TODO: change the name to something more informative. # Hack #2 from correctness: For every i,a,b with p^i_a > 0, there exists g so that (i,a,g,b) > 0. # This requires some trial and error. Otherwise, one xi at least remains without TO-input options # This will eventually not be part of the client constraints. Currently used to `direct' the soutions. # Another hack from correctness. For all b, there exists at least one value of t, so # that every a reading this g will output b. Thus, the resulting sum of probabilities for # that g is 1. There may be additional g's (the g results from some V assigned to server for each output # value). def get_rand_tup(len): return tuple([sage.misc.prandom.choice(range(2)) for i in range(len)]) # i = -1 , search for an index # i = -2 : auxiliary for relaxed mode # [1,0,0] # [0,1,0] # [0,0,1] # [0,0,0] def safe_insert(val_l, ind_l, (i,ind), v): to_search = [] if i >= 0: to_search = [i] elif i == -1: to_search = [j for j in range(3)] elif i == -2: val_l[ind_l.index(ind)] = v return for j in to_search: if (j,ind) in ind_l: val_l[ind_l.index((j,ind))] = v def find_set_server_vars(l, p0): try: print 'How much is server variable space limited by this solution & correctness?' 
print 'List = ',p0 p1 = [] for i in range(3): for t in itertools.product(range(2), repeat = 2*l): p1.append((i,t)) for (y,(a,g,b)) in p0: for j in range(3): if ((b == 0) and (i == j)) or ((b == 1) and (i != j)): all = tuples_fixed_proj(l, a, g) for t in all: try: p1.remove((j, t)) except Exception as e: pass print 'number of remaining server values =',len(p1) if (len(p1) > 0): print 'server variables are ',p1 except: 'Uncaught exception!' def permute_rows(m, b): nrows = m.nrows() perm = Permutations(nrows).random_element() pm = [row for row in m] pb = [v for v in b] new_m = matrix(QQ, [pm[perm[i] - 1] for i in range(nrows)]) new_b = vector(QQ, [pb[perm[i] - 1] for i in range(nrows)]) return (new_m, new_b) def get_full_rank_sub(m, b): nrows = m.nrows() new_mat = [] new_b = [] # print 'into get_full_rank_sub',m.nrows(), m.ncols() for (i,row) in enumerate(m): so_far = (matrix(QQ, new_mat)).transpose() try: so_far\row except Exception as e: # print 'at full rank sub Exception = ',e,i new_mat.append(row) new_b.append(b[i]) new_mat = matrix(QQ, new_mat) new_b = vector(new_b) return (new_mat, new_b) def print_matrix(m): if m.ncols < 37: print m return per_row = 30 for row in m: n = ceil(QQ(m.nrows())/per_row) for i in range(n): ran = min(per_row, m.nrows() - i * per_row) print [row[i * per_row + j] for j in range(ran)] print '\n' def non_zeros(m): return sum(sum(1 for j in row if abs(j) != 0) for row in m) def get_inv_sub_matrix(m, b): (new_mat, new_b) = get_full_rank_sub(m, b) ncols = new_mat.ncols() print 'new_mat rank / ',new_mat.rank(),'(',new_mat.nrows(),',',new_mat.ncols(),')' new_mat_tr = new_mat.transpose() b_dum = vector(QQ, [0 for i in range(ncols)]) (full_sub, b_dum) = permute_rows(new_mat_tr, b_dum) (full_sub, b_dum) = get_full_rank_sub(full_sub, b_dum) inv_sub = full_sub.inverse() print 'x = A^{-1}*b', # print full_sub det_A = full_sub.determinant() print '|A| = ', det_A # Finding independent columns print 'Sparsity parameters',non_zeros(full_sub),' => ',non_zeros(inv_sub) x = inv_sub * new_b print 'x = ',x print 'len(x) = ',len(x) if (full_sub.nrows() < 50 and abs(det_A) > 1) or (full_sub.ncols() < 40): print print_matrix(inv_sub) return x def test_linear_solutions_sub(l, t = 3, p = 5): b = [] constraints = [] print 'generating constraints ...' freq = [0,0,0] ag_set = [] a_set = [] for a in itertools.product(range(2), repeat = l): i = sage.misc.prandom.choice(range(p)) if (i <= t): a_set.append(a) for g in itertools.product(range(2), repeat = l): ag_set.append((a,g)) ag_set_len = len(ag_set) print 'a_set = ',a_set for t in itertools.product(range(2), repeat = 2*l): cur_row = [0 for i in range(ag_set_len)] b.append(1) for a in a_set: cur_row[ag_set.index((a, get_at(t,a,l)))] = 1 constraints.append(cur_row) distr = [[0 for j in range(ag_set_len)] for i in range(3)] present = [0,0,0] for a in a_set: y = sage.misc.prandom.choice(range(3)) present[y] = 1 for g in itertools.product(range(2), repeat = l): distr[y][ag_set.index((a,zeros))] = 1 if sum(present[y] for y in range(3)) < 3: print 'Unluckly choice. Aborting execution...' 
for y in range(3): constraints.append(distr[y]) b.append(1) m_constr = matrix(QQ,constraints) print_matrix(m_constr) right_side = vector(QQ, b) try: v = m_constr\right_side print 'solution = ',v print 'len(solution) = ',len(v) nz = sum(1 for x in v if abs(x) > 0) print 'non-zeros ', nz except Exception as e: print 'Caught exception ',e my_eps = QQ(1/10000) import time for l in range(3, 4): for step in range(5): t0 = time.time() print 'finding a solution for l=',l zeros = tuple((0 for i in range(l))) try: q = test_linear_solutions_sub(l, 4, 5) finally: print 'time =', time.time() - t0,'\n'
finding a solution for l= 3 generating constraints ... a_set = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)] [1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0] [1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0] [0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0] [0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0] [0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0] [0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0] [1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0] [1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0] [0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0] [0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0] [1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0] [1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1] [0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0] [0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1] [0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0] [0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0] [0 0 0 1 0 0 0 0 0 0 1 
0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0] [0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0] [0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0] [0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1] [0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0] [0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1] [0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0] [0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0] [0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0] [0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0] [0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0] [0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0] [0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0] [0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 
0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1] [0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0] [0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1] [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] Caught exception matrix equation has no solutions time = 0.0235588550568 finding a solution for l= 3 generating constraints ... 
a_set = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)] [1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0] [1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0] [0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0] [0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0] [0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0] [0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0] [1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0] [1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0] [0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0] [0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0] [1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0] [1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1] [0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0] [0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1] [0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0] [0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0] [0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0] [0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0] [0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0] [0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1] [0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0] [0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1] [0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0] [0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0] [0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0] [0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0] [0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0] [0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0] [0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0] [0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 
0 0 0 1] [0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0] [0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0] [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0] [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1] [0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1] Caught exception matrix equation has no solutions time = 0.0284080505371 finding a solution for l= 3 generating constraints ... 
a_set = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
[0/1 constraint matrix rows omitted] Caught exception matrix equation has no solutions time = 0.0441679954529 finding a solution for l= 3 generating constraints ...
a_set = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
[0/1 constraint matrix rows omitted] Caught exception matrix equation has no solutions time = 0.0235769748688 finding a solution for l= 3 generating constraints ...
a_set = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
[0/1 constraint matrix rows omitted] Caught exception matrix equation has no solutions time = 0.0199320316315
MIT
3-eq-ILP-lab-mini-suf-submatrices.ipynb
anpc/perfect-reductions
Evaluate, optimize, and fit a classifier. Background: Before we can begin making crop/non-crop predictions, we first need to evaluate, optimize, and fit a classifier. Because we are working with spatial data, and because the size of our training dataset is relatively small (by machine learning standards), we need to implement some lesser-known methods for evaluating and optimizing our model. These include implementing spatially explicit k-fold cross-validation techniques, running nested k-fold cross-validation, and fitting a model on our entire dataset. Description: In this notebook, we will use the training data collected in the first notebook (`1_Extract_training_data.ipynb`) to fit and evaluate a classifier. The steps undertaken are: (1) spatially cluster our training data to visualize spatial groupings; (2) calculate an unbiased performance estimate via **nested, k-fold cross-validation**; (3) optimize the hyperparameters of the model using `GridSearchCV`; (4) fit a model on _all_ the data using the parameters identified in step 3; (5) save the model to disk. *** Getting started: To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell. Load packages
# from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier import os import joblib import numpy as np import pandas as pd from joblib import dump from pprint import pprint import matplotlib.pyplot as plt from odc.io.cgroups import get_cpu_quota from sklearn.model_selection import GridSearchCV, ShuffleSplit, KFold from sklearn.metrics import roc_curve, auc, f1_score, balanced_accuracy_score
_____no_output_____
Apache-2.0
testing/sahel_cropmask/3_Train_fit_evaluate_classifier.ipynb
digitalearthafrica/crop-mask
Analysis Parameters: * `training_data`: Name and location of the training data `.txt` file output from running `1_Extract_training_data.ipynb` * `coordinate_data`: Name and location of the coordinate data `.txt` file output from running `1_Extract_training_data.ipynb` * `Classifier`: This parameter refers to the scikit-learn classification model to use; first uncomment the classifier of interest in the `Load Packages` section and then enter the function name into this parameter, e.g. `Classifier = RandomForestClassifier` * `metric`: A single string that denotes the scorer used to optimize the model. See the scoring parameter page [here](https://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter) for a pre-defined list of options. For binary classifications, 'F1' or 'balanced_accuracy' are good metrics. * `output_suffix`: A suffix to add to the exported model corresponding to the model iteration.
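For example, a minimal sketch of switching to a different classifier, following the instructions above (SVC here is only an illustration; note that the `param_grid` defined further below is specific to a random forest and would also need to change):
from sklearn.svm import SVC   # uncomment/add the matching import in the Load packages cell
Classifier = SVC              # then point the Classifier parameter at the class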
training_data = "results/training_data/sahel_training_data_20211110.txt" Classifier = RandomForestClassifier metric = 'balanced_accuracy' output_suffix = '20211110'
_____no_output_____
Apache-2.0
testing/sahel_cropmask/3_Train_fit_evaluate_classifier.ipynb
digitalearthafrica/crop-mask
K-Fold Cross-Validation Parameters: * `inner_cv_splits`: Number of cross-validation splits to do on the inner loop, e.g. `5` * `outer_cv_splits`: Number of cross-validation splits to do on the outer loop, e.g. `5` * `test_size`: This will determine what fraction of the dataset will be set aside as the testing dataset. There is a trade-off here between having a larger test set that will help us better determine the quality of our classifier, and leaving enough data to train the classifier. A good default is to set 10-20% of your dataset aside for testing purposes.
inner_cv_splits = 5 outer_cv_splits = 5 test_size = 0.20
_____no_output_____
Apache-2.0
testing/sahel_cropmask/3_Train_fit_evaluate_classifier.ipynb
digitalearthafrica/crop-mask
Automatically find the number of CPUs
ncpus=round(get_cpu_quota()) print('ncpus = '+str(ncpus))
ncpus = 15
Apache-2.0
testing/sahel_cropmask/3_Train_fit_evaluate_classifier.ipynb
digitalearthafrica/crop-mask
Import training and coordinate data
# load the data model_input = np.loadtxt(training_data) # coordinates = np.loadtxt(coordinate_data) # load the column_names with open(training_data, 'r') as file: header = file.readline() column_names = header.split()[1:] # Extract relevant indices from training data model_col_indices = [column_names.index(var_name) for var_name in column_names[1:]] #convert variable names into sci-kit learn nomenclature X = model_input[:, model_col_indices] y = model_input[:, 0]
_____no_output_____
Apache-2.0
testing/sahel_cropmask/3_Train_fit_evaluate_classifier.ipynb
digitalearthafrica/crop-mask
Evaluating the classifier: Now that we're happy with the spatial clustering, we can evaluate the classifier via _nested_, k-fold cross-validation. The k-fold cross-validation procedure is used to estimate the performance of machine learning models when making predictions on data not used during training. However, when the same cross-validation procedure and dataset are used to both tune the hyperparameters and select a model, it is likely to lead to an optimistically biased evaluation of the model performance. One approach to overcoming this bias is to nest the hyperparameter optimization procedure under the model selection procedure. This is called nested cross-validation and is the preferred way to evaluate and compare tuned machine learning models. _Figure 1: Nested K-Fold Cross Validation_ ***Before evaluating the model, we need to set up some hyperparameters to test during optimization. The `param_grid` object below is set up to test various important hyperparameters for a Random Forest model. > **Note**: the parameters in the `param_grid` object depend on the classifier being used. This notebook is set up for a random forest classifier; to adjust the parameters to suit a different classifier, look up the important parameters under the relevant [sklearn documentation](https://scikit-learn.org/stable/supervised_learning.html).
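As an aside, the same nested scheme can be written compactly by wrapping `GridSearchCV` inside `cross_val_score`. This is only a sketch, not the loop the notebook runs below; it assumes the `X` and `y` arrays loaded earlier and uses a reduced grid for brevity:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
inner = KFold(n_splits=5, shuffle=True, random_state=0)   # hyperparameter tuning happens here
outer = KFold(n_splits=5, shuffle=True, random_state=0)   # unbiased performance estimate comes from here
tuned = GridSearchCV(RandomForestClassifier(random_state=1), {'n_estimators': [150, 200]}, scoring='balanced_accuracy', cv=inner)
nested_scores = cross_val_score(tuned, X, y, scoring='balanced_accuracy', cv=outer)
print(nested_scores.mean(), nested_scores.std())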
# Create the parameter grid param_grid = { 'max_features': ['auto', 'log2', None], 'n_estimators': [150,200,250,300,350,400], 'criterion':['gini', 'entropy'] }
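As a quick check on the cost of this grid: 3 `max_features` values x 6 `n_estimators` values x 2 `criterion` values gives 36 candidate models, so each outer split runs 36 * 5 = 180 inner-loop fits, which matches the "totalling 180 fits" message `GridSearchCV` prints further down:
n_candidates = 3 * 6 * 2                                # 36 parameter combinations
fits_per_outer_split = n_candidates * inner_cv_splits   # 36 * 5 = 180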
_____no_output_____
Apache-2.0
testing/sahel_cropmask/3_Train_fit_evaluate_classifier.ipynb
digitalearthafrica/crop-mask
Now we will conduct the nested CV using `KFold` splits. This will take a while to run, depending on the number of inner and outer CV splits.
outer_cv = KFold(n_splits=outer_cv_splits, shuffle=True, random_state=0) # lists to store results of CV testing acc = [] f1 = [] roc_auc = [] i = 1 for train_index, test_index in outer_cv.split(X, y): print(f"Working on {i}/{outer_cv_splits} outer cv split", end='\r') model = Classifier(random_state=1) # index training, testing, and coordinate data X_tr, X_tt = X[train_index, :], X[test_index, :] y_tr, y_tt = y[train_index], y[test_index] # inner split on data within outer split inner_cv = KFold(n_splits=inner_cv_splits, shuffle=True, random_state=0) clf = GridSearchCV( estimator=model, param_grid=param_grid, scoring=metric, n_jobs=ncpus, refit=True, cv=inner_cv.split(X_tr, y_tr), ) clf.fit(X_tr, y_tr) # predict using the best model best_model = clf.best_estimator_ pred = best_model.predict(X_tt) # evaluate model w/ multiple metrics # ROC AUC probs = best_model.predict_proba(X_tt) probs = probs[:, 1] fpr, tpr, thresholds = roc_curve(y_tt, probs) auc_ = auc(fpr, tpr) roc_auc.append(auc_) # Overall accuracy ac = balanced_accuracy_score(y_tt, pred) acc.append(ac) # F1 scores f1_ = f1_score(y_tt, pred) f1.append(f1_) i += 1
Working on 5/5 outer cv split
Apache-2.0
testing/sahel_cropmask/3_Train_fit_evaluate_classifier.ipynb
digitalearthafrica/crop-mask
The results of our model evaluation
print("=== Nested K-Fold Cross-Validation Scores ===") print("Mean balanced accuracy: "+ str(round(np.mean(acc), 2))) print("Std balanced accuracy: "+ str(round(np.std(acc), 2))) print('\n') print("Mean F1: "+ str(round(np.mean(f1), 2))) print("Std F1: "+ str(round(np.std(f1), 2))) print('\n') print("Mean roc_auc: "+ str(round(np.mean(roc_auc), 3))) print("Std roc_auc: "+ str(round(np.std(roc_auc), 2)))
=== Nested K-Fold Cross-Validation Scores === Mean balanced accuracy: 0.82 Std balanced accuracy: 0.01 Mean F1: 0.77 Std F1: 0.01 Mean roc_auc: 0.916 Std roc_auc: 0.01
Apache-2.0
testing/sahel_cropmask/3_Train_fit_evaluate_classifier.ipynb
digitalearthafrica/crop-mask
These scores represent a robust estimate of the accuracy of our classifier. However, because we are using only a subset of data to fit and optimize the models, it is reasonable to expect these scores are an under-estimate of the final model's accuracy. Also, the _map_ accuracy will differ from the accuracies reported here since the training data is not a perfect representation of the data in the real world (e.g. if we have purposively over-sampled from hard-to-classify regions, or if the proportions of crop to non-crop do not match the proportions in the real world). Optimize hyperparameters: Hyperparameter searches are a required process in machine learning. Machine learning models require certain "hyperparameters": model settings that cannot be learned from the data and must be chosen by the user. Finding good values for these parameters is a "hyperparameter search" or "hyperparameter optimization". To optimize the parameters in our model, we use [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) to exhaustively search through a set of parameters and determine the combination that will result in the highest accuracy based upon the accuracy metric defined.
# generate n_splits of train-test splits rs = ShuffleSplit(n_splits=outer_cv_splits, test_size=test_size, random_state=0) # instantiate a GridSearchCV clf = GridSearchCV(Classifier(), param_grid, scoring=metric, verbose=1, cv=rs.split(X, y), n_jobs=ncpus) clf.fit(X, y) print("The most accurate combination of tested parameters is: ") pprint(clf.best_params_) print('\n') print("The "+metric+" score using these parameters is: ") print(round(clf.best_score_, 2))
Fitting 5 folds for each of 36 candidates, totalling 180 fits The most accurate combination of tested parameters is: {'criterion': 'entropy', 'max_features': None, 'n_estimators': 250} The balanced_accuracy score using these parameters is: 0.82
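If you want to see more than the single best combination, `GridSearchCV` also keeps the full grid of results in its `cv_results_` attribute; a minimal sketch of inspecting it, assuming the fitted `clf` from the cell above:
results = pd.DataFrame(clf.cv_results_)   # pandas was imported as pd in the Load packages cell
cols = ['params', 'mean_test_score', 'std_test_score', 'rank_test_score']
print(results[cols].sort_values('rank_test_score').head(10))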
Apache-2.0
testing/sahel_cropmask/3_Train_fit_evaluate_classifier.ipynb
digitalearthafrica/crop-mask
Fit a model: Using the best parameters from our hyperparameter optimization search, we now fit our model on all the data to give the best possible model.
#create a new model # new_model = Classifier(**clf.best_params_, random_state=1) new_model = Classifier(**{'criterion': 'entropy', 'max_features': None, 'n_estimators': 200}, random_state=1) new_model.fit(X, y)
_____no_output_____
Apache-2.0
testing/sahel_cropmask/3_Train_fit_evaluate_classifier.ipynb
digitalearthafrica/crop-mask
Save the model: Running this cell will export the classifier as a binary `.joblib` file. This will allow the model to be imported in the subsequent script, `4_Predict.ipynb`.
model_filename = 'results/sahel_ml_model_'+output_suffix+'.joblib' dump(new_model, model_filename)
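For completeness, a minimal sketch of re-loading the saved model later (the exact code in `4_Predict.ipynb` may differ; this only illustrates the round trip):
from joblib import load
loaded_model = load(model_filename)   # same path that was passed to dump()
# loaded_model.predict(...) can then be called on new feature arrays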
_____no_output_____
Apache-2.0
testing/sahel_cropmask/3_Train_fit_evaluate_classifier.ipynb
digitalearthafrica/crop-mask
Loading results: Results have the format {conf: table_of_results}, where **conf** represents the experimental setting and is the tuple (0-remove_stopwords, 1-remove_punctuation, 2-termsim_metric, 3-normalization, 4-termsim_th, 5-synsets_taken, 6-wscheme, 7-sim_metric). There is another configuration format for the last experiments we did: 0. Removing stopwords flag: [True, False]; 1. Removing punctuation flag: [True, False]; 2. WordNet similarity metrics: ["path", "lch", "wup", "res", "jcn", "lin"]; 3. Features extracted to compute similarity: ["token", "lemma", "stem", "lemmapos"]; 4. Features used to extract synsets: ["token", "lemma", "lemmapos"]; 5. Information Content used in some WordNet metrics: ["bnc_ic_2007", "bnc_ic_2000", "semcor_ic", "brown_ic"]; 6. Normalization flag: [True, False]; 7. Term-term similarity minimum threshold: [0.0, 0.25, 0.5, 0.75]; 8. Synsets selection strategy (all-vs-all, first): ["all", "first"]; 9. Features weighting scheme: ["tf", "binary"]; 10. Text similarity method: ["mihalcea", "softcosine", "stevenson"]. **table_of_results** is a numpy 2d array with scores for each **similarity threshold** in the range **range(0, 101, 5)**, sorted by the best accuracy. The **scores** (columns) are: scores = (0-threshold, 1-accuracy, 2-f_measure, 3-precision, 4-recall)
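To make the layout concrete, a minimal sketch of pulling the best-accuracy row out of one entry, assuming `experiments` has been loaded as in the cell below and uses the 11-field configuration format (variable names here are only illustrative):
conf, table = next(iter(experiments.items()))
threshold, accuracy, f_measure, precision, recall = table[0]   # rows are sorted, best accuracy first
print(conf[10], conf[2], threshold, accuracy)                  # text similarity method, WordNet metric, best row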
#experiments = pickle.load(open("./Results/results_20170620_complex_normalization.pickle", 'rb')) experiments = pickle.load(open("./Results/results_20170830_normalization_4termsimTh_Rada_Stevenson_Softcosine.pickle", 'rb')) len(experiments) # res = (threshold, accuracy, f_measure, precision, recall) max([(conf, res[0][0], res[0][1], res[0][2]) for conf, res in experiments.items() #if conf[2] == "path" ] , key=lambda x:x[2]) #[(conf, res[0][0], res[0][1]) for conf, res in all_scores.items()] def extract_best_results(experiments, ktop=3, metric="accuracy", wordnet_metric=None, text_metric=None): """Extract best results""" # Maps score names to column index in the numpy 2d array # containing the results for a particular configuration scores_map = {"accuracy":1, "f_measure":2, "precision":3, "recall":4} best_results = np.zeros((ktop*2, 5)) corresponding_confs = np.array([(None, None, None, None, None, None, None, None, None, None, None)]*(ktop*2), dtype=np.dtype('O')) #print("best_results:", best_results.shape) #print("corresponding_confs:", corresponding_confs.shape) #print("Starts\n-------------------") for conf, results in experiments.items(): # Skip text sim metrics distinct to <text_metric> if text_metric and conf[10] != text_metric: continue # Skip wordnet sim metrics distinct to <wordnet_metric> if wordnet_metric and conf[2] != wordnet_metric: continue # Sort by desired score in descending order and select best ktop sorted_results = results[results[:,scores_map[metric]].argsort()[::-1]][:ktop, :] #print("sorted results", sorted_results.shape) # Configuration of results results_conf = np.array([conf]*ktop, dtype=np.dtype('O')) #print("results conf", sorted_results.shape) corresponding_confs[ktop:,:] = results_conf #print("corresponding_confs", corresponding_confs.shape) #print(corresponding_confs) # Vertically stacking best and new results best_results[ktop:,:] = sorted_results[:ktop,:] #print(best_results) # Updating the best results if needed idx = best_results[:,scores_map[metric]].argsort(axis=0)[::-1] best_results = best_results[idx,:] corresponding_confs = corresponding_confs[idx,:] #print("New best results") #print(best_results) #print("New corresponding configurations") #print(corresponding_confs) # *** Declare best_results with twice the size of ktop and do everthing in-place *** # Then compare running times # Choose the new corresponding configurations to the new best results #break #print("\n") return best_results[:ktop,:], corresponding_confs[:ktop,:] bres, bconf = extract_best_results(experiments, ktop=3, metric="accuracy", wordnet_metric=None, text_metric=None) print(bres) print() print(bconf)
[[0.7 0.74092247 0.81886792 0.77575561 0.86705412] [0.7 0.74092247 0.81886792 0.77575561 0.86705412] [0.7 0.74018646 0.81875749 0.77411003 0.86887032]] [[False True 'lin' 'lemma' 'lemma' 'bnc_ic_2007' False 0.5 'all' 'binary' 'mihalcea'] [False True 'lin' 'lemma' 'lemma' 'bnc_ic_2007' False 0.5 'all' 'tf' 'mihalcea'] [False True 'lin' 'stem' 'token' 'bnc_ic_2007' False 0.5 'all' 'tf' 'mihalcea']]
MIT
SoftCosine_Results_Analysis.ipynb
CubasMike/wordnet_paraphrase
Getting some results: In this cell we get the best results for each text similarity metric and the top-3 best WordNet metrics.
my_results = [] for text_metric in ["mihalcea", "softcosine", "stevenson"]: for wordnet_metric in ["path", "lch", "wup", "res", "jcn", "lin"]: print("------------------------------------------------------") print(text_metric, wordnet_metric) res, conf = extract_best_results(experiments, ktop=1, metric="accuracy", wordnet_metric=wordnet_metric, text_metric=text_metric) print(conf) print(res) print("\n") my_results.append(np.concatenate((conf[0,:], res[0,0:3]), axis=-1)) res = np.array(sorted(my_results, key=lambda x:x[-2], reverse=True))
_____no_output_____
MIT
SoftCosine_Results_Analysis.ipynb
CubasMike/wordnet_paraphrase
Saving results to latex table format. The configuration fields are: 0. Removing stopwords flag: [True, False]; 1. Removing punctuation flag: [True, False]; 2. WordNet similarity metrics: ["path", "lch", "wup", "res", "jcn", "lin"]; 3. Features extracted to compute similarity: ["token", "lemma", "stem", "lemmapos"]; 4. Features used to extract synsets: ["token", "lemma", "stem", "lemmapos"]; 5. Information Content used in some WordNet metrics: ["bnc_ic_2007", "bnc_ic_2000", "semcor_ic", "brown_ic"]; 6. Normalization flag: [True, False]; 7. Term-term similarity minimum threshold: [0.0, 0.25, 0.5, 0.75]; 8. Synsets selection strategy (all-vs-all, first): ["all", "first"]; 9. Features weighting scheme: ["tf", "binary"]; 10. Text similarity method: ["mihalcea", "softcosine", "stevenson"]; 11. Text similarity threshold: Real; 12. Accuracy: Real; 13. F1-score: Real
ic_map = {None: "", "bnc_ic_2007": "bnc07", "bnc_ic_2000": "bnc00", "semcor_ic":"semcor", "brown_ic":"brown"} ss_map = {"first":"1", "all":"n"} ws_map = {"tf":"tf", "binary":"bin"} with open("latex_out.txt", "w") as fid: for row in res: line = "{0} & {1} & {3} & {9} & {4} & {8} & {2} & {5} & {6} & {7:.2f} & {10} & {11:.2f} & {12:.3f} & {13:.3f}".format( r"\checkmark" if row[0] else "", r"\checkmark" if row[1] else "", row[2], row[3], row[4], ic_map[row[5]], r"\checkmark" if row[6] else "", row[7], ss_map[row[8]], ws_map[row[9]], row[10], row[11], row[12], row[13] )+"\\\\\n" #line = " & ".join([str(x) for x in row]) +"\\\\\n" fid.write(line) test_configurations = [] for row in res: test_configurations.append(tuple(row[:-3])) with open("test_configurations.pickle", "wb") as fid: pickle.dump(test_configurations, fid)
_____no_output_____
MIT
SoftCosine_Results_Analysis.ipynb
CubasMike/wordnet_paraphrase
Analyzing test scores
test_scores = pickle.load(open("./Results/test_results_20180504.pickle","rb")) test_scores ic_map = {None: "", "bnc_ic_2007": "bnc07", "bnc_ic_2000": "bnc00", "semcor_ic":"semcor", "brown_ic":"brown"} ss_map = {"first":"1", "all":"n"} ws_map = {"tf":"tf", "binary":"bin"} new_scores = [] for old_row in res: conf = tuple(old_row[:-3]) scores_mat = test_scores[conf] scores = [x for x in scores_mat if x[0] == old_row[-3]][0] new_scores.append((conf[-1], conf[2], scores[1], scores[2])) new_scores.sort(key=lambda x:x[2], reverse=True) with open("latex_test_out.txt", "w") as fid: for row in new_scores: line = "{0} & {1} & {2:.3f} & {3:.3f} \\\\\n".format(*row) fid.write(line)
_____no_output_____
MIT
SoftCosine_Results_Analysis.ipynb
CubasMike/wordnet_paraphrase
Other example with chest information. Training: Prepare a function to calculate the feature vector, going from 3D data to [1D x number_of_features].
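As a tiny, self-contained illustration of the [1D x number_of_features] layout that the feature function below builds (toy values only, not the real body-navigation distance features):
import numpy as np
data3d = np.random.rand(4, 5, 6)       # toy 3D volume
f1 = data3d.reshape(-1, 1)             # each feature becomes one column over all voxels
f2 = (data3d ** 2).reshape(-1, 1)
fv = np.concatenate([f1, f2], 1)
print(fv.shape)                        # (120, 2): 4*5*6 voxels, 2 features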
def localization_fv2(data3d, voxelsize_mm): # scale fv = [] # f0 = scipy.ndimage.filters.gaussian_filter(data3d, sigma=3).reshape(-1, 1) #f1 = scipy.ndimage.filters.gaussian_filter(data3dr, sigma=1).reshape(-1, 1) - f0 #f2 = scipy.ndimage.filters.gaussian_filter(data3dr, sigma=5).reshape(-1, 1) - f0 #f3 = scipy.ndimage.filters.gaussian_filter(data3dr, sigma=10).reshape(-1, 1) - f0 #f4 = scipy.ndimage.filters.gaussian_filter(data3dr, sigma=20).reshape(-1, 1) - f0 # position asdfas import bodynavigation as bn ss = bn.BodyNavigation(data3d, voxelsize_mm) fd1 = ss.dist_to_lungs().reshape(-1, 1) fd2 = ss.dist_to_spine().reshape(-1, 1) fd3 = ss.dist_sagittal().reshape(-1, 1) fd4 = ss.dist_coronal().reshape(-1, 1) fd5 = ss.dist_axial().reshape(-1, 1) fd6 = ss.dist_to_surface().reshape(-1, 1) fd7 = ss.dist_diaphragm().reshape(-1, 1) fd8 = ss.dist_to_chest().reshape(-1, 1) # f6 = scipy.ndimage.filters.gaussian_filter(data3d, sigma=[20, 1, 1]).reshape(-1, 1) - f0 # f7 = scipy.ndimage.filters.gaussian_filter(data3d, sigma=[1, 20, 1]).reshape(-1, 1) - f0 # f8 = scipy.ndimage.filters.gaussian_filter(data3d, sigma=[1, 1, 20]).reshape(-1, 1) - f0 # print "fv shapes ", f0.shape, fd2.shape, fd3.shape fv = np.concatenate([ # f0, # f1, f2, f3, f4, fd1, fd2, fd3, fd4, fd5, fd6, fd7, #f6, f7, f8 ], 1) return fv import imtools.trainer3d import imtools.datasets ol = imtools.trainer3d.Trainer3D() ol.feature_function = localization_fv2 for one in imtools.datasets.sliver_reader( "*[0-2].mhd", read_seg=True, sliver_reference_dir=sliver_reference_dir): numeric_label, vs_mm, oname, orig_data, rname, ref_data = one ol.add_train_data(orig_data, ref_data, voxelsize_mm=vs_mm) ol.fit()
/Users/mjirik/miniconda/lib/python2.7/site-packages/numpy/core/numeric.py:190: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future a = empty(shape, dtype, order) /Users/mjirik/miniconda/lib/python2.7/site-packages/skimage/morphology/misc.py:122: UserWarning: Only one label was provided to `remove_small_objects`. Did you mean to use a boolean array? warn("Only one label was provided to `remove_small_objects`. " /Users/mjirik/miniconda/lib/python2.7/site-packages/scipy/ndimage/interpolation.py:568: UserWarning: From scipy 0.13.0, the output shape of zoom() is calculated with round() instead of int() - for these inputs the size of the returned array has changed. "the returned array has changed.", UserWarning)
MIT
examples/liver_localization_training_with_chest.ipynb
mjirik/bodynavigation
Testing
one = list(imtools.datasets.sliver_reader("*019.mhd", read_seg=True))[0] numeric_label, vs_mm, oname, orig_data, rname, ref_data = one fit = ol.predict(orig_data, voxelsize_mm=vs_mm) plt.figure(figsize=(15,10)) sed3.show_slices(orig_data, fit, slice_step=20, axis=1, flipV=True) import lisa.volumetry_evaluation lisa.volumetry_evaluation.compare_volumes_sliver(ref_data, fit, vs_mm)
_____no_output_____
MIT
examples/liver_localization_training_with_chest.ipynb
mjirik/bodynavigation
Fitting the parameters
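For reference, the system fitted below is just `SIR_model_fit` written out: $\frac{dS}{dt} = -\beta \frac{S I}{N_0}$, $\frac{dI}{dt} = \beta \frac{S I}{N_0} - \gamma I$, $\frac{dR}{dt} = \gamma I$. `curve_fit` then estimates $\beta$ and $\gamma$ from the observed infection counts, and the reproduction number printed at the end is $R_0 = \beta / \gamma$.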
from scipy import optimize from scipy import integrate ydata = np.array(df_analyse.Germany[35:]) #90 time = np.arange(len(ydata)) I0 = ydata[0] S0 = 1000000 R0 = 0 beta print(I0) def SIR_model_fit(SIR, time, beta, gamma): S,I,R = SIR dS = -beta * S * I/N0 dI = beta * S * I/N0 - gamma * I dR = gamma * I return([dS, dI, dR]) def fit_odeint(x,beta,gamma): return integrate.odeint(SIR_model_fit, (S0,I0,R0), time, args=(beta, gamma))[:,1] # [,:1] infected rate # Integrate popt = [0.4, 0.1] #beta, gamma fit_odeint(time, *popt) popt, pcov = optimize.curve_fit(fit_odeint, time, ydata) perr = np.sqrt(np.diag(pcov)) print('Standard deviation errors : ', str(perr), 'Infection Start : ', ydata[0]) fitted = fit_odeint(time, *popt) plt.semilogy(time, ydata, 'o') plt.semilogy(time, fitted) plt.title('SIR model for Germany') plt.ylabel('Number of infected people') plt.xlabel('Days') plt.show() print('Optimal Parameters : beta = ', popt[0], 'gamma = ', popt[1]) print('Reproduction number, R0 : ', popt[0]/popt[1])
_____no_output_____
MIT
notebooks/SIR_modeling/.ipynb_checkpoints/0_SIR_modeling_intro-checkpoint.ipynb
ebinzacharias/ads_COVID-19
Dynamic Beta
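The cell below builds the time-varying infection rate piecewise: with the constants used there, $\beta(t) = \beta_{max}$ for the first $t_{initial} = 25$ days, decreases linearly to $\beta_{min}$ over the next $t_{intro\_measures} = 14$ days, stays at $\beta_{min}$ for $t_{hold} = 21$ days, and increases linearly back to $\beta_{max}$ over the final $t_{relax} = 21$ days (phase boundaries at days 25, 39, 60 and 81).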
t_initial = 25 t_intro_measures = 14 t_hold = 21 t_relax = 21 beta_max = 0.4 beta_min = 0.11 gamma = 0.1 pd_beta = np.concatenate((np.array(t_initial*[beta_max]), np.linspace(beta_max, beta_min, t_intro_measures), np.array(t_hold * [beta_min]), np.linspace(beta_min, beta_max, t_relax) )) pd_beta SIR = np.array([S0,I0,R0]) propagation_rates = pd.DataFrame(columns={'Susceptible':S0, 'Infected':I0, 'Recovered':R0 }) for each_beta in pd_beta: new_delta_vector = SIR_model(SIR, each_beta, gamma) SIR = SIR + new_delta_vector propagation_rates = propagation_rates.append({'Susceptible':SIR[0], 'Infected':SIR[1], 'Recovered':SIR[2], },ignore_index=True ) fig, ax1 = plt.subplots(1,1) ax1.plot(propagation_rates.index, propagation_rates.Infected, label = 'Infected', linewidth = 3) #ax1.plot(propagation_rates.index, propagation_rates.Recovered, label = 'Recovered') #ax1.plot(propagation_rates.index, propagation_rates.Susceptible, label = 'Susceptible') ax1.bar(np.arange(len(ydata)), ydata, width=2, label = 'Actual cases in Germany', color = 'r') t_phases = np.array([t_initial, t_intro_measures, t_hold, t_relax]).cumsum() ax1.axvspan(0, t_phases[0], facecolor='b', alpha=0.2, label="No Measures") ax1.axvspan(t_phases[0], t_phases[1], facecolor='b', alpha=0.3, label="Hard Measures") ax1.axvspan(t_phases[1], t_phases[2], facecolor='b', alpha=0.4, label="Holding Measures") ax1.axvspan(t_phases[2], t_phases[3], facecolor='b', alpha=0.5, label="Relaxed Measures") ax1.axvspan(t_phases[3], len(propagation_rates.Infected),facecolor='b', alpha=0.6, label="Hard Measures Again") ax1.set_ylim(10,1.5*max(propagation_rates.Infected)) #ax1.set_xlim(0,100) ax1.set_yscale('log') ax1.set_title('SIR Simulation', size= 16) ax1.set_xlabel('Number of days', size=16) ax1.legend(loc='best', prop={'size':16})
_____no_output_____
MIT
notebooks/SIR_modeling/.ipynb_checkpoints/0_SIR_modeling_intro-checkpoint.ipynb
ebinzacharias/ads_COVID-19
Imports
import json import re import string import scipy import matplotlib.pyplot as plt import numpy as np from tqdm import tqdm_notebook as tqdm from nltk.sentiment.util import mark_negation from nltk import wordpunct_tokenize from nltk.tokenize import word_tokenize from nltk.corpus import stopwords from nltk.stem.snowball import SnowballStemmer from sklearn.linear_model import LinearRegression,SGDClassifier,ElasticNet,LogisticRegression from sklearn.ensemble import GradientBoostingClassifier,VotingClassifier from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import accuracy_score,f1_score,mean_squared_error,confusion_matrix from sklearn.feature_extraction.text import TfidfVectorizer,CountVectorizer
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
Constants
train_path = 'data/train.json' dev_path = 'data/dev.json' translator = str.maketrans("","", string.punctuation) # stemmer = SnowballStemmer("english", ignore_stopwords=True)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
Function Defs
def read_file(path): data_X = [] data_Y = [] with open(path, 'r') as data_file: line = data_file.readline() while line: data = json.loads(line) data_X.append(data['review']) data_Y.append(data['ratings']) line = data_file.readline() return data_X,data_Y def get_metrics_from_pred(y_pred,y_true): mse = mean_squared_error(y_pred,y_true) try: f1_scor = f1_score(y_true, y_pred, average='weighted') acc = accuracy_score(y_true, y_pred) conf_matrix = confusion_matrix(y_true,y_pred) except: y_pred = np.round(y_pred) f1_scor = f1_score(y_true, y_pred, average='weighted') acc = accuracy_score(y_true, y_pred) conf_matrix = confusion_matrix(y_true,y_pred) print("MSE = ",mse," F1 = ",f1_scor," Accuracy = ",acc) plt.matshow(conf_matrix) plt.colorbar() def get_metrics(model,X,y_true): y_pred = model.predict(X) get_metrics_from_pred(y_pred,y_true) def get_metrics_using_probs(model,X,y_true): y_pred = model.predict_proba(X) y_pred = np.average(y_pred,axis=1, weights=[1,2,3,4,5])*15 get_metrics_from_pred(y_pred,y_true) def remove_repeats(sentence): pattern = re.compile(r"(.)\1{2,}") return pattern.sub(r"\1\1", sentence) def tokenizer1(sentence): sentence = sentence.translate(translator) # Remove punctuations sentence = sentence.lower() # Convert to lowercase sentence = re.sub(r'\d+', '', sentence) # Remove Numbers sentence = remove_repeats(sentence) # Remove repeated characters # sentence = sentence.strip() # Remove Whitespaces tokens = wordpunct_tokenize(sentence) # Tokenize # tokens = word_tokenize(sentence) # Tokenize # for i in range(len(tokens)): # Stem word # tokens[i] = stemmer.stem(tokens[i]) return tokens # emoticon_string = r""" # (?: # [<>]? # [:;=8] # eyes # [\-o\*\']? # optional nose # [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth # | # [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth # [\-o\*\']? # optional nose # [:;=8] # eyes # [<>]? # )""" # # The components of the tokenizer: # regex_strings = ( # # Phone numbers: # r""" # (?: # (?: # (international) # \+?[01] # [\-\s.]* # )? # (?: # (area code) # [\(]? # \d{3} # [\-\s.\)]* # )? # \d{3} # exchange # [\-\s.]* # \d{4} # base # )""" # , # # Emoticons: # emoticon_string # , # # HTML tags: # r"""<[^>]+>""" # , # # Twitter username: # r"""(?:@[\w_]+)""" # , # # Twitter hashtags: # r"""(?:\#+[\w_]+[\w\'_\-]*[\w_]+)""" # , # # Remaining word types: # r""" # (?:[a-z][a-z'\-_]+[a-z]) # Words with apostrophes or dashes. # | # (?:[+\-]?\d+[,/.:-]\d+[+\-]?) # Numbers, including fractions, decimals. # | # (?:[\w_]+) # Words without apostrophes or dashes. # | # (?:\.(?:\s*\.){1,}) # Ellipsis dots. # | # (?:\S) # Everything else that isn't whitespace. 
# """ # ) # ###################################################################### # # This is the core tokenizing regex: # word_re = re.compile(r"""(%s)""" % "|".join(regex_strings), re.VERBOSE | re.I | re.UNICODE) # # The emoticon string gets its own regex so that we can preserve case for them as needed: # emoticon_re = re.compile(regex_strings[1], re.VERBOSE | re.I | re.UNICODE) # # These are for regularizing HTML entities to Unicode: # html_entity_digit_re = re.compile(r"&#\d+;") # html_entity_alpha_re = re.compile(r"&\w+;") # amp = "&amp;" # negation_re = re.compile(r""" # never|no|nothing|nowhere|noone|none|not| # havent|hasnt|hadnt|cant|couldnt|shouldnt| # wont|wouldnt|dont|doesnt|didnt|isnt|arent|aint| # n't| # haven't|hasn't|hadn't|can't|couldn't|shouldn't| # won't|wouldn't|don't|doesn't|didn't|isn't|aren't|ain't # """,re.VERBOSE ) # clause_level_re = re.compile(r"""^[.:;!?]$""",re.VERBOSE ) # ###################################################################### # class Tokenizer: # def __init__(self, preserve_case=False): # self.preserve_case = preserve_case # def tokenize(self, s): # """ # Argument: s -- any string or unicode object # Value: a tokenize list of strings; conatenating this list returns the original string if preserve_case=False # """ # # Try to ensure unicode: # # try: # # s = unicode(s) # # except UnicodeDecodeError: # # s = str(s).encode('string_escape') # # s = unicode(s) # # Fix HTML character entitites: # # Tokenize: # words = word_re.findall(s) # # Possible alter the case, but avoid changing emoticons like :D into :d: # if not self.preserve_case: # words = list(map((lambda x : x if emoticon_re.search(x) else x.lower()), words)) # # negator = False # # for i in range(len(words)): # # word = words[i] # # if(negation_re.match(word)): # # negator = !negator # # elif(clause_level_re.match(word)): # # negator = False # # elif(negator): # # words[i] = word+"_NEG" # return words # tok = Tokenizer().tokenize tokenize = tokenizer1 # tokenize = tok # for i in tqdm(range(len(X_train))): # tokenize(X_train[i]) # for i in range(200,600): # print(tokenize(X_train[i])) X_train,Y_train = read_file(train_path) X_dev,Y_dev = read_file(dev_path) # processed_stopwords = [] # for word in stopwords.words('english'): # processed_stopwords += tokenize(word) # # print(processed_stopwords) # vectorizer = TfidfVectorizer(strip_accents='ascii', # lowercase=True, # tokenizer=tokenize, # stop_words=processed_stopwords, # ngram_range=(1,1), # binary=True, # norm='l2', # analyzer='word') # vectorizer = TfidfVectorizer(binary=True,tokenizer=tokenize) # vectorizer = TfidfVectorizer(tokenizer=tokenize) vectorizer = TfidfVectorizer(tokenizer=tokenize,ngram_range=(1,2)) # vectorizer = CountVectorizer(tokenizer=tokenize,ngram_range=(1,2)) X_train_counts = vectorizer.fit_transform(X_train) X_dev_counts = vectorizer.transform(X_dev) # print(X_train_counts) # from sklearn import preprocessing # scaler = preprocessing.StandardScaler(with_mean=False).fit(X_train_counts) # X_train_counts = scaler.transform(X_train_counts) # X_dev_counts = scaler.transform(X_dev_counts) # print(X_train_counts)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
* Try removing whole numbers * Try separating numbers and text * Try replacing 000ps by ooops * Try removing repeated characters like sssslllleeeepppp. Baseline
# all_5 = list(5*np.ones([len(Y_dev),])) # get_metrics_from_pred(all_5,Y_dev)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
Trying Multinomial Naive Bayes
# model = MultinomialNB() # model.fit(X_train_counts,Y_train) # get_metrics(model,X_dev_counts,Y_dev) # get_metrics_using_probs(model,X_dev_counts,Y_dev)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
Trying Logistic Regression
model = LogisticRegression(verbose=1,n_jobs=7,solver='sag',multi_class='ovr') model.fit(X_train_counts,Y_train) get_metrics(model,X_dev_counts,Y_dev) get_metrics_using_probs(model,X_dev_counts,Y_dev) # model = LogisticRegression(verbose=1,n_jobs=7,class_weight='balanced',multi_class='ovr',solver='liblinear') # model.fit(X_train_counts,Y_train) # get_metrics(model,X_dev_counts,Y_dev) # get_metrics_using_probs(model,X_dev_counts,Y_dev) # model = LogisticRegression(verbose=1,n_jobs=7,class_weight='balanced',multi_class='multinomial',solver='lbfgs') # model.fit(X_train_counts,Y_train) # get_metrics(model,X_dev_counts,Y_dev) # get_metrics_using_probs(model,X_dev_counts,Y_dev) # model = LogisticRegression(verbose=1,n_jobs=7,class_weight='balanced',multi_class='ovr',solver='liblinear',penalty='l1') # model.fit(X_train_counts,Y_train) # get_metrics(model,X_dev_counts,Y_dev) # get_metrics_using_probs(model,X_dev_counts,Y_dev) # model = LogisticRegression(verbose=1,n_jobs=7,class_weight='balanced',multi_class='multinomial',solver='saga',penalty='l1') # model.fit(X_train_counts,Y_train) # get_metrics(model,X_dev_counts,Y_dev) # get_metrics_using_probs(model,X_dev_counts,Y_dev)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
Linear Regression
# model = LinearRegression(n_jobs=7) # model.fit(X_train_counts,Y_train) # get_metrics(model,X_dev_counts,Y_dev)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
SGD Classifier
# model = SGDClassifier(n_jobs=7,verbose=True) # model.fit(X_train_counts,Y_train) # get_metrics(model,X_dev_counts,Y_dev)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
ElasticNet
# model = ElasticNet() # model.fit(X_train_counts,Y_train) # get_metrics(model,X_dev_counts,Y_dev)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
GradientBoostingClassifier
# model = GradientBoostingClassifier(verbose=True) # model.fit(X_train_counts,Y_train) # get_metrics(model,X_dev_counts,Y_dev)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
Complicated Model (Tree with two branches: 1-3 and 4-5)
# Split the training data by rating: high (4-5) and low (1-3) branches.
indices = np.where(list(map(lambda x:x>3,Y_train)))[0]
X_train_counts_4_5 = X_train_counts[indices]
Y_train_4_5 = [Y_train[j] for j in indices]

indices = np.where(list(map(lambda x:x<=3,Y_train)))[0]
X_train_counts_1_3 = X_train_counts[indices]
Y_train_1_3 = [Y_train[j] for j in indices]

# Root classifier: low (0) vs. high (1) ratings.
Y_modified = list(map(lambda x:int(x>3),Y_train))

model1 = LogisticRegression(verbose=1,n_jobs=7,solver='sag')
model1.fit(X_train_counts,Y_modified)

# Leaf classifiers: 4 vs. 5 on the high branch, 1/2/3 on the low branch.
model2 = LogisticRegression(verbose=1,n_jobs=7,solver='sag')
model2.fit(X_train_counts_4_5,Y_train_4_5)

model3 = LogisticRegression(verbose=1,n_jobs=7,solver='sag',multi_class='ovr')
model3.fit(X_train_counts_1_3,Y_train_1_3)

# Hard decision at the root, then a probability-weighted rating within the chosen branch.
pred1 = model1.predict(X_dev_counts)
pred2 = model2.predict_proba(X_dev_counts)
pred3 = model3.predict_proba(X_dev_counts)

pred = []
for i in tqdm(range(len(pred1))):
    if(pred1[i] == 1):
        pred.append(pred2[i][0]*4.0 + pred2[i][1]*5.0)
    else:
        pred.append(pred3[i][0]*1.0 + pred3[i][1]*2.0 + pred3[i][2]*3.0)

get_metrics_from_pred(pred,Y_dev)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
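All of the tree models in this notebook turn `predict_proba` outputs into a score by weighting each class by its rating value. A minimal helper capturing that pattern is sketched below; `expected_rating` is a hypothetical name, not something defined in the original notebook.

import numpy as np

def expected_rating(proba, classes):
    """Probability-weighted rating: sum over classes of P(class) * class value, per row."""
    return np.asarray(proba) @ np.asarray(classes, dtype=float)

# For the low branch above this would be:
# expected_rating(model3.predict_proba(X_dev_counts), model3.classes_)

The columns of `predict_proba` follow `model.classes_`, which is why the cells here can weight them by 1.0/2.0/3.0 and 4.0/5.0 in order.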
Another Try (Tree splitting reviews into negative, neutral and positive)
indices = np.where(list(map(lambda x:x>3,Y_train)))[0] X_train_counts_4_5 = X_train_counts[indices] Y_train_4_5 = [Y_train[j] for j in indices] indices = np.where(list(map(lambda x:x<3,Y_train)))[0] X_train_counts_1_2 = X_train_counts[indices] Y_train_1_2 = [Y_train[j] for j in indices] indices = np.where(list(map(lambda x:x==3,Y_train)))[0] X_train_counts_3 = X_train_counts[indices] Y_train_3 = [Y_train[j] for j in indices] def modif(x): if (x==3): return 1 elif(x>3): return 2 else: return 0 Y_modified = list(map(lambda x: modif(x),Y_train)) model1 = LogisticRegression(verbose=1,n_jobs=7,solver='sag',multi_class='ovr') model1.fit(X_train_counts,Y_modified) model2 = LogisticRegression(verbose=1,n_jobs=7,solver='sag') model2.fit(X_train_counts_4_5,Y_train_4_5) model3 = LogisticRegression(verbose=1,n_jobs=7,solver='sag') model3.fit(X_train_counts_1_2,Y_train_1_2) pred1 = model1.predict(X_dev_counts) pred1_p = model1.predict_proba(X_dev_counts) pred2 = model2.predict_proba(X_dev_counts) pred3 = model3.predict_proba(X_dev_counts) pred = [] for i in tqdm(range(len(pred1))): if(pred1[i] == 0): pred.append(pred3[i][0]*1.0 + pred3[i][1]*2.0) elif(pred1[i] == 1): pred.append(pred1_p[i][0]*1.5 + pred1_p[i][1]*3 + pred1_p[i][2]*4.5) elif(pred1[i] == 2): pred.append(pred2[i][0]*4.0 + pred2[i][1]*5.0) get_metrics_from_pred(pred,Y_dev) pred_n_3_p = model1.predict_proba(X_dev_counts) pred_4_5 = model2.predict_proba(X_dev_counts) pred_1_2 = model3.predict_proba(X_dev_counts) pred = [] for i in tqdm(range(len(pred1))): pred.append(pred_n_3_p[i][0]*pred_1_2[i][0]*1.0 + pred_n_3_p[i][0]*pred_1_2[i][1]*2.0 + pred_n_3_p[i][1]*3.0 + pred_n_3_p[i][2]*pred_4_5[i][0]*4.0 + pred_n_3_p[i][2]*pred_4_5[i][1]*5.0) get_metrics_from_pred(pred,Y_dev)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
Voting Classifier (With simple OvR logistic regression and multinomial naive Bayes)
# m1 = LogisticRegression(verbose=1,n_jobs=7,solver='sag',multi_class='ovr') # m2 = MultinomialNB() # model = VotingClassifier(estimators=[('lr', m1),('gnb', m2)],voting='soft') # model.fit(X_train_counts,Y_train) # get_metrics(model,X_dev_counts,Y_dev) # get_metrics_using_probs(model,X_dev_counts,Y_dev)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
Binary Logistic Regressions Everywhere (Tree whose root separates neutral from the rest, then 1-2 vs. 4-5)
indices = np.where(list(map(lambda x: x!=3,Y_train)))[0] X_train_counts_p_n = X_train_counts[indices] Y_train_p_n = [1 if Y_train[j]>3 else 0 for j in indices] indices = np.where(list(map(lambda x:x>3,Y_train)))[0] X_train_counts_4_5 = X_train_counts[indices] Y_train_4_5 = [Y_train[j] for j in indices] indices = np.where(list(map(lambda x:x<3,Y_train)))[0] X_train_counts_1_2 = X_train_counts[indices] Y_train_1_2 = [Y_train[j] for j in indices] def modif(x): if (x==3): return 1 else: return 0 Y_modified = list(map(lambda x: modif(x),Y_train)) model_neutral = LogisticRegression(verbose=1,n_jobs=7,solver='sag') model_neutral.fit(X_train_counts,Y_modified) model_n_p = LogisticRegression(verbose=1,n_jobs=7,solver='sag') model_n_p.fit(X_train_counts_p_n,Y_train_p_n) model_4_5 = LogisticRegression(verbose=1,n_jobs=7,solver='sag') model_4_5.fit(X_train_counts_4_5,Y_train_4_5) model_1_2 = LogisticRegression(verbose=1,n_jobs=7,solver='sag') model_1_2.fit(X_train_counts_1_2,Y_train_1_2) pred_neutral = model_neutral.predict_proba(X_dev_counts) pred_n_p = model_n_p.predict_proba(X_dev_counts) pred_1_2 = model_1_2.predict_proba(X_dev_counts) pred_4_5 = model_4_5.predict_proba(X_dev_counts) pred = [] for i in tqdm(range(len(pred_neutral))): pred.append(pred_neutral[i][1]*3.0 + pred_neutral[i][0]*pred_n_p[i][0]*pred_1_2[i][0]*1.0 + pred_neutral[i][0]*pred_n_p[i][0]*pred_1_2[i][1]*2.0+ pred_neutral[i][0]*pred_n_p[i][1]*pred_4_5[i][0]*4.0+ pred_neutral[i][0]*pred_n_p[i][1]*pred_4_5[i][1]*5.0) get_metrics_from_pred(pred,Y_dev) pred_neutral_c = model_neutral.predict(X_dev_counts) pred_neutral = model_neutral.predict_proba(X_dev_counts) pred_n_p_c = model_n_p.predict(X_dev_counts) pred_n_p = model_n_p.predict_proba(X_dev_counts) pred_1_2_c = model_1_2.predict(X_dev_counts) pred_1_2 = model_1_2.predict_proba(X_dev_counts) pred_4_5_c = model_4_5.predict(X_dev_counts) pred_4_5 = model_4_5.predict_proba(X_dev_counts) pred = [] for i in tqdm(range(len(pred_neutral))): if(pred_neutral_c[i] == 1): pred.append(3) else: if(pred_n_p_c[i] == 0): pred.append(pred_1_2_c[i]) else: pred.append(pred_4_5_c[i]) get_metrics_from_pred(pred,Y_dev)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772
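Restating what the first (soft) combination in the cell above computes, the chain rule along the tree gives $P(3) = P(\text{neutral})$, $P(r) = P(\lnot\text{neutral})\,P(\text{neg}\mid\lnot\text{neutral})\,P(r\mid\text{neg})$ for $r \in \{1,2\}$, and $P(r) = P(\lnot\text{neutral})\,P(\text{pos}\mid\lnot\text{neutral})\,P(r\mid\text{pos})$ for $r \in \{4,5\}$; the predicted score is then $\sum_{r=1}^{5} r\,P(r)$. The second combination in the same cell takes hard decisions at each node instead.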
Another Try (Full binary tree skewed towards rating 1)
# Left-leaning tree: root separates 1-3 from 4-5, then 1-2 from 3, then 1 from 2.
indices = np.where(list(map(lambda x: x<=3,Y_train)))[0]
X_train_counts_12_3 = X_train_counts[indices]
Y_train_12_3 = [1 if Y_train[j]==3 else 0 for j in indices]

indices = np.where(list(map(lambda x:x>3,Y_train)))[0]
X_train_counts_4_5 = X_train_counts[indices]
Y_train_4_5 = [Y_train[j] for j in indices]

indices = np.where(list(map(lambda x:x<3,Y_train)))[0]
X_train_counts_1_2 = X_train_counts[indices]
Y_train_1_2 = [Y_train[j] for j in indices]

def modif(x):
    if (x>3):
        return 1
    else:
        return 0

Y_modified = list(map(lambda x: modif(x),Y_train))

model_123_45 = LogisticRegression(verbose=1,n_jobs=7,solver='sag')
model_123_45.fit(X_train_counts,Y_modified)

model_4_5 = LogisticRegression(verbose=1,n_jobs=7,solver='sag')
model_4_5.fit(X_train_counts_4_5,Y_train_4_5)

model_12_3 = LogisticRegression(verbose=1,n_jobs=7,solver='sag')
model_12_3.fit(X_train_counts_12_3,Y_train_12_3)

model_1_2 = LogisticRegression(verbose=1,n_jobs=7,solver='sag')
model_1_2.fit(X_train_counts_1_2,Y_train_1_2)

pred_123_45 = model_123_45.predict_proba(X_dev_counts)
pred_12_3 = model_12_3.predict_proba(X_dev_counts)
pred_1_2 = model_1_2.predict_proba(X_dev_counts)
pred_4_5 = model_4_5.predict_proba(X_dev_counts)

# Soft combination: chain the branch probabilities down the tree.
pred = []
for i in tqdm(range(len(pred_123_45))):
    pred.append(pred_123_45[i][0]*pred_12_3[i][0]*pred_1_2[i][0]*1.0+
                pred_123_45[i][0]*pred_12_3[i][0]*pred_1_2[i][1]*2.0+
                pred_123_45[i][0]*pred_12_3[i][1]*3.0+
                pred_123_45[i][1]*pred_4_5[i][0]*4.0+
                pred_123_45[i][1]*pred_4_5[i][1]*5.0)

get_metrics_from_pred(pred,Y_dev)
_____no_output_____
MIT
A1_part_1/Non Pipelined Tester.ipynb
ankurshaswat/COL772