Dataset columns: markdown (string, 0 to 1.02M chars), code (string, 0 to 832k chars), output (string, 0 to 1.02M chars), license (string, 3 to 36 chars), path (string, 6 to 265 chars), repo_name (string, 6 to 127 chars)
Filtering (cutting) events and particles with advanced selections

NumPy has a versatile selection mechanism: the same expressions apply to Awkward Arrays, and more.
# First particle momentum in the first 5 events
events.prt.p[:5, 0]

# First two particles in all events
events.prt.pdg[:, :2]

# First direction of the last event
events.prt.dir[-1, 0]
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
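To make the slice semantics concrete without the events file, here is a minimal sketch on a small, hand-built jagged array (the array and its values are invented for illustration; `awkward1` is the 2020-era package name, newer releases use `import awkward as ak`):

```python
import awkward1 as ak  # newer releases: import awkward as ak

# A made-up jagged array standing in for a field like events.prt.p
toy = ak.Array([[1.1, 2.2, 3.3], [4.4], [5.5, 6.6]])

toy[:2, 0]   # first element of each of the first 2 lists -> [1.1, 4.4]
toy[:, :2]   # first two elements of every list -> [[1.1, 2.2], [4.4], [5.5, 6.6]]
toy[-1, 0]   # first element of the last list -> 5.5
```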
NumPy also lets you filter (cut) using an array of booleans.
events.prt_count > 100

np.count_nonzero(events.prt_count > 100)

events[events.prt_count > 100]
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
One dimension can be selected with an array while another is selected with a slice.
# Select events with at least two particles, then select the first two particles
events.prt[events.prt_count >= 2, :2]
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
This can be a good way to avoid errors from trying to select what isn't there.
try:
    events.prt[:, 0]
except Exception as err:
    print(type(err).__name__, str(err))

events.prt[events.prt_count > 0, 0]
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
See also [awkward-array.readthedocs.io](https://awkward-array.readthedocs.io/) for a list of operations like [ak.num](https://awkward-array.readthedocs.io/en/latest/_auto/ak.num.html):
?ak.num

ak.num(events.prt), events.prt_count
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
You can even use an array of integers to select a set of indexes at once.
# First and last particle in each event that has at least two
events.prt.pdg[ak.num(events.prt) >= 2][:, [0, -1]]
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
But beyond NumPy, we can also use arrays of nested lists as boolean or integer selectors.
# Array of lists of True and False
abs(events.prt.vtx) > 0.10

# Particles that have vtx > 0.10 for all events (notice that there's still 10000)
events.prt[abs(events.prt.vtx) > 0.10]
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
See [awkward-array.readthedocs.io](https://awkward-array.readthedocs.io/) for more, but there are functions like [ak.max](https://awkward-array.readthedocs.io/en/latest/_auto/ak.max.html), which picks the maximum in each group.

* With `axis=0`, the group is the set of all events.
* With `axis=1`, the groups are the particles in each event.
?ak.max

ak.max(abs(events.prt.vtx), axis=1)

# Selects *events* that have a maximum *particle vertex* greater than 100
events[ak.max(abs(events.prt.vtx), axis=1) > 100]
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
The difference between "select particles" and "select events" is the number of jagged dimensions in the array; "reducers" like ak.max reduce the dimensionality by one.

There are other reducers like ak.any, ak.all, ak.sum...
?ak.sum

# Is this particle an antineutron?
events.prt.pdg == Particle.from_string("n~").pdgid

# Are any particles in the event antineutrons?
ak.any(events.prt.pdg == Particle.from_string("n~").pdgid, axis=1)

# Select events that contain an antineutron
events[ak.any(events.prt.pdg == Particle.from_string("n~").pdgid, axis=1)]
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
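As a quick illustration of how reducers collapse one jagged dimension (independent of the events file; the values below are made up):

```python
import awkward1 as ak  # newer releases: import awkward as ak

toy = ak.Array([[1, 2, 3], [], [4, 5]])

ak.sum(toy, axis=1)      # [6, 0, 9] -- one number per inner list
ak.any(toy > 4, axis=1)  # [False, False, True]
ak.max(toy, axis=1)      # [3, None, 5] -- None for the empty list
```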
We can use these techniques to make subcollections for specific particle types and attach them to the same `events` array for easy access.
events.prt[abs(events.prt.pdg) == abs(Particle.from_string("p").pdgid)]

# Assignments have to be through __setitem__ (brackets), not __setattr__ (as an attribute).
# Is that a problem? (Assigning as an attribute would have to be implemented with care, if at all.)
events["pions"] = events.prt[abs(events.prt.pdg) == abs(Particle.from_string("pi+").pdgid)]
events["kaons"] = events.prt[abs(events.prt.pdg) == abs(Particle.from_string("K+").pdgid)]
events["protons"] = events.prt[abs(events.prt.pdg) == abs(Particle.from_string("p").pdgid)]

events.pions
events.kaons
events.protons

ak.num(events.prt), ak.num(events.pions), ak.num(events.kaons), ak.num(events.protons)
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Flattening for plots and regularizing to NumPy for machine learning

All of this structure is great, but eventually, we need to plot the data or ship it to some statistical process, such as machine learning.

Most of these tools know about NumPy arrays and rectilinear data, but not Awkward Arrays. As a design choice, Awkward Array **does not implicitly flatten**; you need to do this yourself, and you might make different choices of how to apply this lossy transformation in different circumstances.

The basic tool for removing structure is [ak.flatten](https://awkward-array.readthedocs.io/en/latest/_auto/ak.flatten.html).
?ak.flatten

# Turn particles-grouped-by-event into one big array of particles
ak.flatten(events.prt, axis=1)

# Eliminate structure at all levels; produce one numerical array
ak.flatten(events.prt, axis=None)
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
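A minimal sketch of the two flattening modes on a made-up doubly nested array (not from the events file) may help show the difference:

```python
import awkward1 as ak  # newer releases: import awkward as ak

toy = ak.Array([[[1, 2], [3]], [], [[4, 5]]])

ak.flatten(toy, axis=1)     # [[1, 2], [3], [4, 5]] -- removes one level of nesting
ak.flatten(toy, axis=None)  # [1, 2, 3, 4, 5] -- removes all structure
```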
For plotting, you probably want to pick one field and flatten it. Flattening with `axis=1` (the default) works for one level of structure and is safer than `axis=None`.

The flattening is explicit as a reminder that a histogram whose entries are particles is different from a histogram whose entries are events.
# Directly through Matplotlib
plt.hist(ak.flatten(events.kaons.p), bins=100, range=(0, 10))

# Through mplhep and boost-histogram, which are more HEP-friendly
hep.histplot(bh.Histogram(bh.axis.Regular(100, 0, 10)).fill(
    ak.flatten(events.kaons.p)
))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
If the particles are sorted (`ak.sort`/`ak.argsort` is [in development](https://github.com/scikit-hep/awkward-1.0/pull/168)), you might want to pick the first kaon from every event that has them (i.e. *use* the event structure).

This is an analysis choice: *you* have to decide you want this.

The `ak.num(events.kaons) > 0` selection is explicit as a reminder that empty events are not counted in the histogram.
hep.histplot(bh.Histogram(bh.axis.Regular(100, 0, 10)).fill(
    events.kaons.p[ak.num(events.kaons) > 0, 0]
))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Or perhaps the maximum kaon momentum in each event. This one must be flattened (with `axis=0`) to remove `None` values.

This flattening is explicit as a reminder that empty events are not counted in the histogram.
ak.max(events.kaons.p, axis=1)

ak.flatten(ak.max(events.kaons.p, axis=1), axis=0)

hep.histplot(bh.Histogram(bh.axis.Regular(100, 0, 10)).fill(
    ak.flatten(ak.max(events.kaons.p, axis=1), axis=0)
))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Or perhaps the momentum of the kaon with the farthest vertex. [ak.argmax](https://awkward-array.readthedocs.io/en/latest/_auto/ak.argmax.html) creates an array of integers selecting from each event.
?ak.argmax

ak.argmax(abs(events.kaons.vtx), axis=1)

?ak.singletons

# Get a length-1 list containing the index of the biggest vertex when there is one
# and a length-0 list when there isn't one
ak.singletons(ak.argmax(abs(events.kaons.vtx), axis=1))

# A nested integer array like this is what we need to select kaons with the biggest vertex
events.kaons[ak.singletons(ak.argmax(abs(events.kaons.vtx), axis=1))]

events.kaons[ak.singletons(ak.argmax(abs(events.kaons.vtx), axis=1))].p

# Flatten the distinction between length-1 lists and length-0 lists
ak.flatten(events.kaons[ak.singletons(ak.argmax(abs(events.kaons.vtx), axis=1))].p)

# Putting it all together...
hep.histplot(bh.Histogram(bh.axis.Regular(100, 0, 10)).fill(
    ak.flatten(events.kaons[ak.singletons(ak.argmax(abs(events.kaons.vtx), axis=1))].p)
))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
If you're sending the data to a library that expects rectilinear structure, you might need to pad and clip the variable-length lists.

[ak.pad_none](https://awkward-array.readthedocs.io/en/latest/_auto/ak.pad_none.html) puts `None` values at the end of each list to reach a minimum length.
?ak.pad_none

# pad them, look at the first 30
ak.pad_none(events.kaons.id, 3)[:30].tolist()
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
The lengths are still irregular, so you can also `clip=True` them.
# pad them, look at the first 30
ak.pad_none(events.kaons.id, 3, clip=True)[:30].tolist()
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
The library we're sending this to might not be able to deal with missing values, so choose a replacement to fill them with.
?ak.fill_none

# fill with -1 <- pad them, look at the first 30
ak.fill_none(ak.pad_none(events.kaons.id, 3, clip=True), -1)[:30].tolist()
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
These are still Awkward-brand arrays; the downstream library might complain if they're not NumPy-brand, so use [ak.to_numpy](https://awkward-array.readthedocs.io/en/latest/_auto/ak.to_numpy.html) or simply cast it with NumPy's `np.asarray`.
?ak.to_numpy

np.asarray(ak.fill_none(ak.pad_none(events.kaons.id, 3, clip=True), -1))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
If you try to convert an Awkward Array to NumPy when structure would be lost, you get an error. (You won't accidentally eliminate structure.)
try:
    np.asarray(events.kaons.id)
except Exception as err:
    print(type(err), str(err))
<class 'ValueError'> in ListOffsetArray64, cannot convert to RegularArray because subarray lengths are not regular
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
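Conversely, a minimal sketch (again with a made-up array) showing that the conversion succeeds when the sublist lengths happen to be regular:

```python
import numpy as np
import awkward1 as ak  # newer releases: import awkward as ak

regular = ak.Array([[1, 2, 3], [4, 5, 6]])  # every sublist has the same length
np.asarray(regular)                         # works: array([[1, 2, 3], [4, 5, 6]])
```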
Broadcasting flat arrays and jagged arrays

NumPy lets you combine arrays and scalars in a mathematical expression by first "broadcasting" the scalar to an array of the same length.
print(np.array([1, 2, 3, 4, 5]) + 100)
[101 102 103 104 105]
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Awkward Array does the same thing, except that each element of a flat array can be broadcasted to each nested list of a jagged array.
print(ak.Array([[1, 2, 3], [], [4, 5], [6]]) + np.array([100, 200, 300, 400]))
[[101, 102, 103], [], [304, 305], [406]]
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
This is useful for emulating

```python
all_vertices = []
for event in events:
    vertices = []
    for kaon in event.kaons:
        vertices.append((kaon.vtx.x - event.true.x, kaon.vtx.y - event.true.y))
    all_vertices.append(vertices)
```

where `event.true.x` and `y` have only one value per event but `kaon.vtx.x` and `y` have one per kaon.
# one value per kaon, one per event
ak.zip([events.kaons.vtx.x - events.true.x,
        events.kaons.vtx.y - events.true.y])
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
You don't have to do anything special for this: broadcasting is a common feature of all functions that apply to more than one array.

You can get it explicitly with [ak.broadcast_arrays](https://awkward-array.readthedocs.io/en/latest/_auto/ak.broadcast_arrays.html).
?ak.broadcast_arrays

ak.broadcast_arrays(events.true.x, events.kaons.vtx.x)
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
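A toy version of the same broadcast (values invented for illustration):

```python
import awkward1 as ak  # newer releases: import awkward as ak

flat = ak.Array([100, 200, 300])
jagged = ak.Array([[1, 2, 3], [], [4]])

ak.broadcast_arrays(flat, jagged)
# first result:  [[100, 100, 100], [], [300]]
# second result: [[1, 2, 3], [], [4]]
```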
Combinatorics: cartesian and combinations

At all levels of a physics analysis, we need to compare objects drawn from different collections.

* **Gen-reco matching:** to associate a reconstructed particle with its generator-level parameters.
* **Cleaning:** associating soft photons with a reconstructed electron or leptons to a jet.
* **Bump-hunting:** looking for mass peaks in pairs of particles.
* **Dalitz analysis:** looking for resonance structure in triples of particles.

To do this with array-at-a-time operations, use one function to generate all the combinations, apply "flat" operations, then use "reducers" to get one value per particle or per event again.

Cartesian and combinations

The two main "explode" operations are [ak.cartesian](https://awkward-array.readthedocs.io/en/latest/_auto/ak.cartesian.html) and [ak.combinations](https://awkward-array.readthedocs.io/en/latest/_auto/ak.combinations.html).

The first generates the **Cartesian product** (a.k.a. cross product) of two collections **per nested list**.

The second generates **distinct combinations** (i.e. "n choose k") of a collection with itself **per nested list**.
?ak.cartesian
?ak.combinations

ak.to_list(ak.cartesian(([[1, 2, 3], [], [4]], [["a", "b"], ["c"], ["d", "e"]])))

ak.to_list(ak.combinations([["a", "b", "c", "d"], [], [1, 2]], 2))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
To search for $\Lambda^0 \to \pi p$, we need to compute the mass of pairs drawn from these two collections.
pairs = ak.cartesian([events.pions, events.protons])
pairs

?ak.unzip

def mass(pairs, left_mass, right_mass):
    left, right = ak.unzip(pairs)
    left_energy = np.sqrt(left.p**2 + left_mass**2)
    right_energy = np.sqrt(right.p**2 + right_mass**2)
    return np.sqrt((left_energy + right_energy)**2 -
                   (left.px + right.px)**2 -
                   (left.py + right.py)**2 -
                   (left.pz + right.pz)**2)

mass(pairs, 0.139570, 0.938272)

hep.histplot(bh.Histogram(bh.axis.Regular(100, 1.115683 - 0.01, 1.115683 + 0.01)).fill(
    ak.flatten(mass(pairs, 0.139570, 0.938272))
))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
We can improve the peak by selecting for opposite charges and large vertexes.
def opposite(pairs):
    left, right = ak.unzip(pairs)
    return pairs[left.charge != right.charge]

def distant(pairs):
    left, right = ak.unzip(pairs)
    return pairs[np.logical_and(abs(left.vtx) > 0.10, abs(right.vtx) > 0.10)]

hep.histplot(bh.Histogram(bh.axis.Regular(100, 1.115683 - 0.01, 1.115683 + 0.01)).fill(
    ak.flatten(mass(distant(opposite(pairs)), 0.139570, 0.938272))
))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Alternatively, all of these functions could have been methods on the pair objects for reuse. (This is to make the point that any kind of object can have methods, not just particles.)
class ParticlePairArray(ak.Array):
    __name__ = "Pairs"

    def mass(self, left_mass, right_mass):
        left, right = self.slot0, self.slot1
        left_energy = np.sqrt(left.p**2 + left_mass**2)
        right_energy = np.sqrt(right.p**2 + right_mass**2)
        return np.sqrt((left_energy + right_energy)**2 -
                       (left.px + right.px)**2 -
                       (left.py + right.py)**2 -
                       (left.pz + right.pz)**2)

    def opposite(self):
        return self[self.slot0.charge != self.slot1.charge]

    def distant(self, cut):
        return self[np.logical_and(abs(self.slot0.vtx) > cut, abs(self.slot1.vtx) > cut)]

ak.behavior["*", "pair"] = ParticlePairArray

pairs = ak.cartesian([events.pions, events.protons], with_name="pair")
pairs

hep.histplot(bh.Histogram(bh.axis.Regular(100, 1.115683 - 0.01, 1.115683 + 0.01)).fill(
    ak.flatten(pairs.opposite().distant(0.10).mass(0.139570, 0.938272))
))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
**Self-study question:** why does the call to `mass` have to be last?

An example for `ak.combinations`: $K_S \to \pi\pi$.
pairs = ak.combinations(events.pions, 2, with_name="pair")
pairs

hep.histplot(bh.Histogram(bh.axis.Regular(100, 0.497611 - 0.015, 0.497611 + 0.015)).fill(
    ak.flatten(pairs.opposite().distant(0.10).mass(0.139570, 0.139570))
))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
**Bonus problem:** $D^0 \to K^- \pi^+ \pi^0$
pizero_candidates = ak.combinations(
    events.prt[events.prt.pdg == Particle.from_string("gamma").pdgid], 2, with_name="pair")

pizero = pizero_candidates[pizero_candidates.mass(0, 0) - 0.13498 < 0.000001]

pizero["px"] = pizero.slot0.px + pizero.slot1.px
pizero["py"] = pizero.slot0.py + pizero.slot1.py
pizero["pz"] = pizero.slot0.pz + pizero.slot1.pz
pizero["p"] = np.sqrt(pizero.px**2 + pizero.py**2 + pizero.pz**2)
pizero

kminus = events.prt[events.prt.pdg == Particle.from_string("K-").pdgid]
piplus = events.prt[events.prt.pdg == Particle.from_string("pi+").pdgid]

triples = ak.cartesian({
    "kminus": kminus[abs(kminus.vtx) > 0.03],
    "piplus": piplus[abs(piplus.vtx) > 0.03],
    "pizero": pizero[np.logical_and(abs(pizero.slot0.vtx) > 0.03, abs(pizero.slot1.vtx) > 0.03)]})
triples

ak.num(triples)

def mass2(left, left_mass, right, right_mass):
    left_energy = np.sqrt(left.p**2 + left_mass**2)
    right_energy = np.sqrt(right.p**2 + right_mass**2)
    return ((left_energy + right_energy)**2 -
            (left.px + right.px)**2 -
            (left.py + right.py)**2 -
            (left.pz + right.pz)**2)

mKpi = mass2(triples.kminus, 0.493677, triples.piplus, 0.139570)
mpipi = mass2(triples.piplus, 0.139570, triples.pizero, 0.1349766)
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
This Dalitz plot doesn't look right (doesn't cut off at kinematic limits), but I'm going to leave it as an exercise for the reader.
dalitz = bh.Histogram(bh.axis.Regular(50, 0, 3), bh.axis.Regular(50, 0, 2))
dalitz.fill(ak.flatten(mKpi), ak.flatten(mpipi))

X, Y = dalitz.axes.edges
fig, ax = plt.subplots()
mesh = ax.pcolormesh(X.T, Y.T, dalitz.view().T)
fig.colorbar(mesh)
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Reducing from combinations

The mass-peak examples above don't need to "reduce" combinations, but many applications do. Suppose that we want to find the "nearest photon to each electron" (e.g. bremsstrahlung).
electrons = events.prt[abs(events.prt.pdg) == abs(Particle.from_string("e-").pdgid)]
photons = events.prt[events.prt.pdg == Particle.from_string("gamma").pdgid]
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
The problem with the raw output of `ak.cartesian` is that all the combinations are mixed together in the same lists.
ak.to_list(ak.cartesian([electrons[["pdg", "id"]], photons[["pdg", "id"]]])[8])
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
We can fix this by asking for `nested=True`, which adds another level of nesting to the output.
ak.to_list(ak.cartesian([electrons[["pdg", "id"]], photons[["pdg", "id"]]], nested=True)[8])
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
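A toy comparison of the two output shapes (made-up values), showing what `nested=True` adds:

```python
import awkward1 as ak  # newer releases: import awkward as ak

left = ak.Array([[1, 2], [3]])
right = ak.Array([["a", "b"], ["c"]])

ak.to_list(ak.cartesian([left, right]))
# [[(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')], [(3, 'c')]]

ak.to_list(ak.cartesian([left, right], nested=True))
# [[[(1, 'a'), (1, 'b')], [(2, 'a'), (2, 'b')]], [[(3, 'c')]]]
```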
All electron-photon pairs associated with a given electron are grouped in a list-within-each-list.

Now we can apply reducers to this inner dimension to sum over some quantity, pick the best one, etc.
def cos_angle(pairs):
    left, right = ak.unzip(pairs)
    return left.dir.x*right.dir.x + left.dir.y*right.dir.y + left.dir.z*right.dir.z

electron_photons = ak.cartesian([electrons, photons], nested=True)

cos_angle(electron_photons)

hep.histplot(bh.Histogram(bh.axis.Regular(100, -1, 1)).fill(
    ak.flatten(cos_angle(electron_photons), axis=None)   # a good reason to use flatten axis=None
))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
We pick the "maximum according to a function" using the same `ak.singletons(ak.argmax(f(x)))` trick as above.
best_electron_photons = electron_photons[ak.singletons(ak.argmax(cos_angle(electron_photons), axis=2))]

hep.histplot(bh.Histogram(bh.axis.Regular(100, -1, 1)).fill(
    ak.flatten(cos_angle(best_electron_photons), axis=None)
))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
By construction, `best_electron_photons` has zero or one elements in each *inner* nested list.
ak.num(electron_photons, axis=2), ak.num(best_electron_photons, axis=2)
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Since we no longer care about that *inner* structure, we could flatten it at `axis=2` (leaving `axis=1` untouched).
best_electron_photons

ak.flatten(best_electron_photons, axis=2)
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
But it would be better to invert the `ak.singletons` by calling `ak.firsts`.
?ak.singletons
?ak.firsts

ak.firsts(best_electron_photons, axis=2)
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Because then we can get back one value for each electron (with `None` if `ak.argmax` resulted in `None` because there were no pairs).
ak.num(electrons), ak.num(ak.firsts(best_electron_photons, axis=2))

ak.all(ak.num(electrons) == ak.num(ak.firsts(best_electron_photons, axis=2)))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
And that means that we can make this "closest photon" an attribute of the electrons. We have now performed electron-photon matching.
electrons["photon"] = ak.firsts(best_electron_photons, axis=2) ak.to_list(electrons[8])
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Current set of reducers:

* [ak.count](https://awkward-array.readthedocs.io/en/latest/_auto/ak.count.html): counts the number in each group (subtly different from [ak.num](https://awkward-array.readthedocs.io/en/latest/_auto/ak.num.html) because `ak.count` is a reducer)
* [ak.count_nonzero](https://awkward-array.readthedocs.io/en/latest/_auto/ak.count_nonzero.html): counts the number of non-zero elements in each group
* [ak.sum](https://awkward-array.readthedocs.io/en/latest/_auto/ak.sum.html): adds up values in the group, the quintessential reducer
* [ak.prod](https://awkward-array.readthedocs.io/en/latest/_auto/ak.prod.html): multiplies values in the group (e.g. for charges or probabilities)
* [ak.any](https://awkward-array.readthedocs.io/en/latest/_auto/ak.any.html): boolean reducer for logical `or` ("do *any* in this group satisfy a constraint?")
* [ak.all](https://awkward-array.readthedocs.io/en/latest/_auto/ak.all.html): boolean reducer for logical `and` ("do *all* in this group satisfy a constraint?")
* [ak.min](https://awkward-array.readthedocs.io/en/latest/_auto/ak.min.html): minimum value in each group (`None` for empty groups)
* [ak.max](https://awkward-array.readthedocs.io/en/latest/_auto/ak.max.html): maximum value in each group (`None` for empty groups)
* [ak.argmin](https://awkward-array.readthedocs.io/en/latest/_auto/ak.argmin.html): index of minimum value in each group (`None` for empty groups)
* [ak.argmax](https://awkward-array.readthedocs.io/en/latest/_auto/ak.argmax.html): index of maximum value in each group (`None` for empty groups)

And other functions that have the same interface as a reducer (reduces a dimension):

* [ak.moment](https://awkward-array.readthedocs.io/en/latest/_auto/ak.moment.html): computes the $n^{th}$ moment in each group
* [ak.mean](https://awkward-array.readthedocs.io/en/latest/_auto/ak.mean.html): computes the mean in each group
* [ak.var](https://awkward-array.readthedocs.io/en/latest/_auto/ak.var.html): computes the variance in each group
* [ak.std](https://awkward-array.readthedocs.io/en/latest/_auto/ak.std.html): computes the standard deviation in each group
* [ak.covar](https://awkward-array.readthedocs.io/en/latest/_auto/ak.covar.html): computes the covariance in each group
* [ak.corr](https://awkward-array.readthedocs.io/en/latest/_auto/ak.corr.html): computes the correlation in each group
* [ak.linear_fit](https://awkward-array.readthedocs.io/en/latest/_auto/ak.linear_fit.html): computes the linear fit in each group
* [ak.softmax](https://awkward-array.readthedocs.io/en/latest/_auto/ak.softmax.html): computes the softmax function in each group

Imperative, but still fast, programming in Numba

Array-at-a-time operations let us manipulate dynamically typed data with compiled code (and in some cases, benefit from hardware vectorization). However, they're complicated. Finding the closest photon to each electron is more complicated than it seems it ought to be.

Some of these things are simpler in imperative (step-by-step, scalar-at-a-time) code. Imperative Python code is slow because it has to check the data type of every object it encounters (among other things); compiled code is faster because these checks are performed once during a compilation step for any number of identically typed values.

We can get the best of both worlds by Just-In-Time (JIT) compiling the code. [Numba](http://numba.pydata.org/) is a NumPy-centric JIT compiler for Python.
import numba as nb

@nb.jit
def monte_carlo_pi(nsamples):
    acc = 0
    for i in range(nsamples):
        x = np.random.random()
        y = np.random.random()
        if (x**2 + y**2) < 1.0:
            acc += 1
    return 4.0 * acc / nsamples

%%timeit
# Run the pure Python function (without nb.jit)
monte_carlo_pi.py_func(1000000)

%%timeit
# Run the compiled function
monte_carlo_pi(1000000)
8.7 ms ± 194 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
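To show what "same interface as a reducer" means for the second group of functions listed above, here is a minimal sketch on a made-up jagged array (not from the events file):

```python
import awkward1 as ak  # newer releases: import awkward as ak

toy = ak.Array([[1.0, 2.0, 3.0], [], [4.0, 6.0]])

ak.mean(toy, axis=1)  # [2.0, None, 5.0] -- one summary value per inner list
ak.std(toy, axis=1)   # standard deviation per inner list (None for the empty one)
```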
The price for this magical speedup is that not all Python code can be accelerated; you have to be conservative with the functions and language features you use, and Numba has to recognize the data types.

Numba recognizes Awkward Arrays.
@nb.jit
def lambda_mass(events):
    num_lambdas = 0
    for event in events:
        num_lambdas += len(event.pions) * len(event.protons)

    lambda_masses = np.empty(num_lambdas, np.float64)

    i = 0
    for event in events:
        for pion in event.pions:
            for proton in event.protons:
                pion_energy = np.sqrt(pion.p**2 + 0.139570**2)
                proton_energy = np.sqrt(proton.p**2 + 0.938272**2)
                mass = np.sqrt((pion_energy + proton_energy)**2 -
                               (pion.px + proton.px)**2 -
                               (pion.py + proton.py)**2 -
                               (pion.pz + proton.pz)**2)
                lambda_masses[i] = mass
                i += 1

    return lambda_masses

lambda_mass(events)

hep.histplot(bh.Histogram(bh.axis.Regular(100, 1.115683 - 0.01, 1.115683 + 0.01)).fill(
    lambda_mass(events)
))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Some constraints:

* Awkward arrays are read-only structures (always true, even outside of Numba)
* Awkward arrays can't be created inside a Numba-compiled function

That was fine for a function that creates and returns a NumPy array, but what if we want to create something with structure? The [ak.ArrayBuilder](https://awkward-array.readthedocs.io/en/latest/_auto/ak.ArrayBuilder.html) is a general way to make data structures.
?ak.ArrayBuilder

builder = ak.ArrayBuilder()

builder.begin_list()
builder.begin_record()
builder.field("x").integer(1)
builder.field("y").real(1.1)
builder.field("z").string("one")
builder.end_record()
builder.begin_record()
builder.field("x").integer(2)
builder.field("y").real(2.2)
builder.field("z").string("two")
builder.end_record()
builder.end_list()

builder.begin_list()
builder.end_list()

builder.begin_list()
builder.begin_record()
builder.field("x").integer(3)
builder.field("y").real(3.3)
builder.field("z").string("three")
builder.end_record()
builder.end_list()

ak.to_list(builder.snapshot())
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
ArrayBuilders can be used in Numba, albeit with some constraints:

* ArrayBuilders can't be created inside a Numba-compiled function (pass them in)
* The `snapshot` method (to turn it into an array) can't be used in a Numba-compiled function (use it outside)
@nb.jit(nopython=True)
def make_electron_photons(events, builder):
    for event in events:
        builder.begin_list()
        for electron in event.electrons:
            best_i = -1
            best_angle = -1.0
            for i in range(len(event.photons)):
                photon = event.photons[i]
                angle = photon.dir.x*electron.dir.x + photon.dir.y*electron.dir.y + photon.dir.z*electron.dir.z
                if angle > best_angle:
                    best_i = i
                    best_angle = angle
            if best_i == -1:
                builder.null()
            else:
                # append the best-matching (closest in angle) photon
                builder.append(event.photons[best_i])
        builder.end_list()

events["electrons"] = events.prt[abs(events.prt.pdg) == abs(Particle.from_string("e-").pdgid)]
events["photons"] = events.prt[events.prt.pdg == Particle.from_string("gamma").pdgid]

builder = ak.ArrayBuilder()
make_electron_photons(events, builder)
builder.snapshot()
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
A few of them are `None` (called `builder.null()` because there were no photons to attach to the electron).
ak.count_nonzero(ak.is_none(ak.flatten(builder.snapshot())))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
But the `builder.snapshot()` otherwise matches up with the `events.electrons`, so it's something we could attach to it, as before.
?ak.with_field

events["electrons"] = ak.with_field(events.electrons, builder.snapshot(), "photon")

ak.to_list(events[8].electrons)
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Grafting jagged data onto Pandas

Awkward Arrays can be Pandas columns.
import pandas as pd

df = pd.DataFrame({"pions": events.pions, "kaons": events.kaons, "protons": events.protons})
df

df["pions"].dtype
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
But that's unlikely to be useful for very complex data structures because there aren't any Pandas functions for deeply nested structure.

Instead, you'll probably want to *convert* the nested structures into the corresponding Pandas [MultiIndex](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html).
ak.pandas.df(events.pions)
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Now the nested lists are represented as MultiIndex rows and the nested records are represented as MultiIndex columns, which are structures that Pandas knows how to deal with. But what about two types of particles, pions and kaons? (And let's simplify to just `"px", "py", "pz", "vtx"`.)
simpler = ak.zip({"pions": events.pions[["px", "py", "pz", "vtx"]],
                  "kaons": events.kaons[["px", "py", "pz", "vtx"]]}, depthlimit=1)

ak.type(simpler)

ak.pandas.df(simpler)
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
There's only one row MultiIndex, so pion 1 in each event is the same row as kaon 1. That association is probably meaningless.

The issue is that a single Pandas DataFrame represents *less* information than an Awkward Array. In general, we would need a collection of DataFrames to losslessly encode an Awkward Array. (Pandas represents the data in [database normal form](https://en.wikipedia.org/wiki/Database_normalization); Awkward represents it in objects.)
# This array corresponds to *two* Pandas DataFrames.
pions_df, kaons_df = ak.pandas.dfs(simpler)

pions_df
kaons_df
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
NumExpr, Autograd, and other third-party libraries

[NumExpr](https://numexpr.readthedocs.io/en/latest/user_guide.html) can calculate pure numerical expressions faster than NumPy because it does so in one pass. (It has a low-overhead virtual machine.)

NumExpr doesn't recognize Awkward Arrays, but we have a wrapper for it.
import numexpr

# This works because px, py, pz are flat, like NumPy
px = ak.flatten(events.pions.px)
py = ak.flatten(events.pions.py)
pz = ak.flatten(events.pions.pz)
numexpr.evaluate("px**2 + py**2 + pz**2")

# This doesn't work because px, py, pz have structure
px = events.pions.px
py = events.pions.py
pz = events.pions.pz
try:
    numexpr.evaluate("px**2 + py**2 + pz**2")
except Exception as err:
    print(type(err), str(err))

# But in this wrapped version, we broadcast and maintain structure
ak.numexpr.evaluate("px**2 + py**2 + pz**2")
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Similarly for [Autograd](https://github.com/HIPS/autograd), which has an `elementwise_grad` for differentiating expressions with respect to NumPy [universal functions](https://docs.scipy.org/doc/numpy/reference/ufuncs.html), but not Awkward Arrays.
@ak.autograd.elementwise_grad
def tanh(x):
    y = np.exp(-2.0 * x)
    return (1.0 - y) / (1.0 + y)

ak.to_list(tanh([{"x": 0.0, "y": []},
                 {"x": 0.1, "y": [1]},
                 {"x": 0.2, "y": [2, 2]},
                 {"x": 0.3, "y": [3, 3, 3]}]))
_____no_output_____
BSD-3-Clause
docs-jupyter/2020-04-08-eic-jlab.ipynb
reikdas/awkward-1.0
Description: Given a string s consisting of digits ('0'-'9') and '#', map s to English lowercase characters as follows:

1. Characters ('a' to 'i') are represented by ('1' to '9') respectively.
2. Characters ('j' to 'z') are represented by ('10#' to '26#') respectively.

Return the string formed after mapping. It is guaranteed that a unique mapping will always exist.

Example 1: Input: s = "10#11#12" Output: "jkab" Explanation: "j" -> "10#", "k" -> "11#", "a" -> "1", "b" -> "2".
Example 2: Input: s = "1326#" Output: "acz"
Example 3: Input: s = "25#" Output: "y"
Example 4: Input: s = "12345678910#11#12#13#14#15#16#17#18#19#20#21#22#23#24#25#26#" Output: "abcdefghijklmnopqrstuvwxyz"

Constraints:
1. 1 <= s.length <= 1000
2. s[i] only contains digits ('0'-'9') and the '#' letter.
3. s will be a valid string such that mapping is always possible.
class Solution:
    def freqAlphabets(self, s: str) -> str:
        res = ''
        idx = 0
        while idx < len(s):
            if idx + 2 < len(s) and 1 <= int(s[idx]) <= 2 and s[idx + 2] == '#':
                val = int(s[idx] + s[idx + 1])
                idx += 2
            else:
                val = int(s[idx])
            res += chr(val + 96)
            idx += 1
        return res

solution = Solution()
solution.freqAlphabets("10#11#12")

96 + 1
_____no_output_____
Apache-2.0
String/1013/1309. Decrypt String from Alphabet to Integer Mapping.ipynb
YuHe0108/Leetcode
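A quick sanity check of the solution against the examples from the problem statement (this assumes the `Solution` class from the cell above; the inputs are exactly those examples):

```python
solution = Solution()
assert solution.freqAlphabets("10#11#12") == "jkab"
assert solution.freqAlphabets("1326#") == "acz"
assert solution.freqAlphabets("25#") == "y"
print("all examples pass")
```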
IEEE-CIS Fraud Detection

Can you detect fraud from customer transactions?
# Data analysis
import pandas as pd

# Data visualization
import matplotlib.pyplot as plt
import seaborn as sn
_____no_output_____
MIT
Mentoria Fraudes/Mentoria - Fraudes Leon.ipynb
leon-maia/Portfolio-Voyager
SampleSubmission is the submission format for the model. Disregard this dataset.
df_SampleSubmission = pd.read_csv('sample_submission.csv')
df_SampleSubmission.head()
_____no_output_____
MIT
Mentoria Fraudes/Mentoria - Fraudes Leon.ipynb
leon-maia/Portfolio-Voyager
Analyzing the test_identity.csv dataset

Metadata

Identity Table

Variables in this table are identity information – network connection information (IP, ISP, Proxy, etc) and digital signature (UA/browser/os/version, etc) associated with transactions. They're collected by Vesta's fraud protection system and digital security partners. (The field names are masked and a pairwise dictionary will not be provided for privacy protection and contract agreement.)

Categorical Features:
* DeviceType
* DeviceInfo
* id_12 - id_38

DeviceInfo: https://www.kaggle.com/c/ieee-fraud-detection/discussion/101203583227

"id01 to id11 are numerical features for identity, which is collected by Vesta and security partners such as device rating, ip_domain rating, proxy rating, etc. Also it recorded behavioral fingerprint like account login times/failed to login times, how long an account stayed on the page, etc. All of these are not able to elaborate due to security partner T&C. I hope you could get basic meaning of these features, and by mentioning them as numerical/categorical, you won't deal with them inappropriately."
df_test_identity = pd.read_csv('test_identity.csv')

l, c = df_test_identity.shape
l

df_test_identity.head()
df_test_identity.tail()
df_test_identity.info()

df_test_identity.isnull().sum().sort_values(ascending=False)
(df_test_identity.isnull().sum().sort_values(ascending=False) / l) * 100

df_test_identity_corr = df_test_identity.corr()
sn.heatmap(df_test_identity_corr, vmin=0, vmax=1)
_____no_output_____
MIT
Mentoria Fraudes/Mentoria - Fraudes Leon.ipynb
leon-maia/Portfolio-Voyager
Analyzing the test_transaction.csv dataset

Transaction table

"It contains money transfer and also other gifting goods and service, like you booked a ticket for others, etc."

TransactionDT: timedelta from a given reference datetime (not an actual timestamp). "TransactionDT first value is 86400, which corresponds to the number of seconds in a day (60 * 60 * 24 = 86400) so I think the unit is seconds. Using this, we know the data spans 6 months, as the maximum value is 15811131, which would correspond to day 183."

TransactionAMT: transaction payment amount in USD. "Some of the transaction amounts have three decimal places to the right of the decimal point. There seems to be a link to three decimal places and a blank addr1 and addr2 field. Is it possible that these are foreign transactions and that, for example, the 75.887 in row 12 is the result of multiplying a foreign currency amount by an exchange rate?"

ProductCD: product code, the product for each transaction. "Product isn't necessary to be a real 'product' (like one item to be added to the shopping cart). It could be any kind of service."

card1 - card6: payment card information, such as card type, card category, issue bank, country, etc.

addr: address. "Both addresses are for purchaser: addr1 as billing region, addr2 as billing country."

dist: distance. "Distances between (not limited to) billing address, mailing address, zip code, IP address, phone area, etc."

P_ and (R__) emaildomain: purchaser and recipient email domain. "Certain transactions don't need a recipient, so R_emaildomain is null."

C1-C14: counting, such as how many addresses are found to be associated with the payment card, etc. The actual meaning is masked. "Can you please give more examples of counts in the variables C1-15? Would these be like counts of phone numbers, email addresses, names associated with the user? I can't think of 15." "Your guess is good, plus like device, ipaddr, billingaddr, etc. Also these are for both purchaser and recipient, which doubles the number."

D1-D15: timedelta, such as days between previous transaction, etc.

M1-M9: match, such as names on card and address, etc.

Vxxx: Vesta engineered rich features, including ranking, counting, and other entity relations. "For example, how many times the payment card associated with a IP and email or address appeared in 24 hours time range, etc." "All Vesta features were derived as numerical. Some of them are counts of orders within a clustering, a time-period or condition, so the value is finite and has ordering (or ranking). I wouldn't recommend to treat any of them as categorical. If any of them resulted in binary by chance, it may be worth trying."
df_test_transaction = pd.read_csv('test_transaction.csv')
display(df_test_transaction)

l2, c2 = df_test_transaction.shape

df_test_transaction.info(verbose=True)

# To view all columns (before learning about the 'verbose' argument), I created a Series with the column names
df_test_transactionColumns = pd.Series(df_test_transaction.columns)
df_test_transactionColumns

# The idea was to expand the columns as much as possible, since the Series above was still truncated
pd.set_option('display.max_columns', None)

# Then I remembered the unique method
df_test_transactionColumns.unique()

df_test_transaction.isnull().sum().sort_values(ascending=False)

df_test_transaction_corr = df_test_transaction.corr()

# It's chaos
plt.figure(figsize=(15,8))
sn.heatmap(df_test_transaction_corr, vmin=0, vmax=1)
plt.show()
_____no_output_____
MIT
Mentoria Fraudes/Mentoria - Fraudes Leon.ipynb
leon-maia/Portfolio-Voyager
High-level RNN TF Example
import numpy as np
import os
import sys
import tensorflow as tf
from common.params_lstm import *
from common.utils import *

# Force one-gpu
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

print("OS: ", sys.platform)
print("Python: ", sys.version)
print("Numpy: ", np.__version__)
print("Tensorflow: ", tf.__version__)
print("GPU: ", get_gpu_name())
print(get_cuda_version())
print("CuDNN Version ", get_cudnn_version())

def create_symbol(CUDNN=True, maxf=MAXFEATURES, edim=EMBEDSIZE, nhid=NUMHIDDEN, batchs=BATCHSIZE):
    word_vectors = tf.contrib.layers.embed_sequence(X, vocab_size=maxf, embed_dim=edim)
    word_list = tf.unstack(word_vectors, axis=1)
    if not CUDNN:
        cell = tf.contrib.rnn.GRUCell(nhid)
        outputs, states = tf.contrib.rnn.static_rnn(cell, word_list, dtype=tf.float32)
    else:
        # Using cuDNN since vanilla RNN
        from tensorflow.contrib.cudnn_rnn.python.ops import cudnn_rnn_ops
        cudnn_cell = cudnn_rnn_ops.CudnnGRU(num_layers=1, num_units=nhid, input_size=edim,
                                            input_mode='linear_input')
        params_size_t = cudnn_cell.params_size()
        params = tf.Variable(tf.random_uniform([params_size_t], -0.1, 0.1), validate_shape=False)
        input_h = tf.Variable(tf.zeros([1, batchs, nhid]))
        outputs, states = cudnn_cell(input_data=word_list, input_h=input_h, params=params)
    logits = tf.layers.dense(outputs[-1], 2, activation=None, name='output')
    return logits

def init_model(m, y, lr=LR, b1=BETA_1, b2=BETA_2, eps=EPS):
    # Single-class labels, don't need dense one-hot
    # Expects unscaled logits, not output of tf.nn.softmax
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=m, labels=y)
    loss = tf.reduce_mean(xentropy)
    optimizer = tf.train.AdamOptimizer(lr, b1, b2, eps)
    training_op = optimizer.minimize(loss)
    return training_op

%%time
# Data into format for library
x_train, x_test, y_train, y_test = imdb_for_library(seq_len=MAXLEN, max_features=MAXFEATURES)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
print(x_train.dtype, x_test.dtype, y_train.dtype, y_test.dtype)

%%time
# Place-holders
X = tf.placeholder(tf.int32, shape=[None, MAXLEN])
y = tf.placeholder(tf.int32, shape=[None])
sym = create_symbol()

%%time
model = init_model(sym, y)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

%%time
# Main training loop: 22s
correct = tf.nn.in_top_k(sym, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
for j in range(EPOCHS):
    for data, label in yield_mb(x_train, y_train, BATCHSIZE, shuffle=True):
        sess.run(model, feed_dict={X: data, y: label})
    # Log
    acc_train = sess.run(accuracy, feed_dict={X: data, y: label})
    print(j, "Train accuracy:", acc_train)

%%time
# Main evaluation loop: 9.19s
n_samples = (y_test.shape[0]//BATCHSIZE)*BATCHSIZE
y_guess = np.zeros(n_samples, dtype=np.int)
y_truth = y_test[:n_samples]
c = 0
for data, label in yield_mb(x_test, y_test, BATCHSIZE):
    pred = tf.argmax(sym, 1)
    output = sess.run(pred, feed_dict={X: data})
    y_guess[c*BATCHSIZE:(c+1)*BATCHSIZE] = output
    c += 1
print("Accuracy: ", 1.*sum(y_guess == y_truth)/len(y_guess))
Accuracy: 0.8598557692307692
MIT
notebooks/Tensorflow_RNN.ipynb
ThomasDelteil/DeepLearningFrameworks
Question Engagement Analysis

Select course and load data set
data_dir = '/Users/benny/data/L@S_2021'
course = 'microbiology'

data_set_filename = f'engagement_{course}.txt'
data_set = pd.read_csv(
    f'{data_dir}/{data_set_filename}',
    sep='\t'
)
data_set
_____no_output_____
CC-BY-4.0
l@s-2021/Question Engagement Analysis.ipynb
vitalsource/data
Mean engagement
data_set.groupby( 'question_type' ).mean().sort_values( by='answered', ascending=False )
_____no_output_____
CC-BY-4.0
l@s-2021/Question Engagement Analysis.ipynb
vitalsource/data
Regression model
%%R
library( lme4 )
R[write to console]: Loading required package: Matrix
CC-BY-4.0
l@s-2021/Question Engagement Analysis.ipynb
vitalsource/data
Standardize the continuous variables.
for col in [ 'course_page_number', 'unit_page_number', 'module_page_number', 'page_question_number' ]:
    data_set[ col ] = ( data_set[ col ] - data_set[ col ].mean() ) / data_set[ col ].std()

data_set.to_csv( '/tmp/to_r.csv', index=False )

%%R
df <- read.csv( '/tmp/to_r.csv' )

%%R
lme.model <- glmer(
    answered ~ course_page_number + unit_page_number + module_page_number + page_question_number
               + question_type + (1|student) + (1|question),
    family=binomial(link=logit),
    data=df,
    control=glmerControl( optimizer="bobyqa", optCtrl=list(maxfun=2e4) )
)
summary( lme.model )
R[write to console]: fixed-effect model matrix is rank deficient so dropping 1 column / coefficient
CC-BY-4.0
l@s-2021/Question Engagement Analysis.ipynb
vitalsource/data
1- Write a list comprehension that contains the numbers from 1 to 31 with the prefix "1-" appended to each number; these are the days of January
L = [f"1-{day}" for day in range(1,32)]
_____no_output_____
MIT
10-warmup-solution_comprehensions.ipynb
hanisaf/advanced-data-management-and-analytics
2- convert the list to a string so each entry prints on one line. Print the result
line = '\n'.join(L)
print(line)

from functools import reduce
line = reduce(lambda x, y: f"{x}\n{y}", L)
print(line)

def combine(x, y):
    return x + '\n' + y
line = reduce(combine, L)
print(line)
_____no_output_____
MIT
10-warmup-solution_comprehensions.ipynb
hanisaf/advanced-data-management-and-analytics
3- Update the comprehension to vary prefix from 1- to 12- to generate all days of the year, do not worry about incorrect dates for now
L = [ f"{month}-{day}" for month in range(1, 13) for day in range(1,32) ] L
_____no_output_____
MIT
10-warmup-solution_comprehensions.ipynb
hanisaf/advanced-data-management-and-analytics
4- now, address the issue of some months having only 30 days and February having 28 days (this year). Hint: use the `valid_date` function
def valid_date(day, month):
    days_of_month = {1:31, 2:28, 3:31, 4:30, 5:31, 6:30, 7:31, 8:31, 9:30, 10:31, 11:30, 12:31}
    max_month = days_of_month[month]
    return day <= max_month

L = [ f"{month}-{day}" for month in range(1, 13) for day in range(1,32) if valid_date(day, month)]
L
_____no_output_____
MIT
10-warmup-solution_comprehensions.ipynb
hanisaf/advanced-data-management-and-analytics
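As an aside (not part of the original exercise), the same check can be written with the standard-library `calendar` module; `valid_date_calendar` and the `year` default below are invented for this sketch:

```python
import calendar

def valid_date_calendar(day, month, year=2021):
    # calendar.monthrange returns (weekday of the 1st, number of days in the month)
    return day <= calendar.monthrange(year, month)[1]

L = [f"{month}-{day}" for month in range(1, 13) for day in range(1, 32)
     if valid_date_calendar(day, month)]
```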
5- using dictionary comprehensions create `f2e` dictionary from the `e2f` dictionary
e2f = {'hi':'bonjour', 'bye':'au revoir', 'bread':'pain', 'water':'eau'}

f2e = { e2f[k]:k for k in e2f}
f2e

f2e = {item[1]:item[0] for item in e2f.items()}
f2e

f2e = {v:k for (k,v) in e2f.items()}
f2e

f2e = {e2f[k]:k for k in e2f}
_____no_output_____
MIT
10-warmup-solution_comprehensions.ipynb
hanisaf/advanced-data-management-and-analytics
Tutorial 4. Interlinked Plots

hvPlot allows you to generate a number of different types of plot quickly from a standard API, returning Bokeh-based [HoloViews](https://holoviews.org) objects as discussed in the previous notebook. Each initial plot will make some aspects of the data clear, and using the automatic interactive Bokeh pan, zoom, and hover tools you can find additional trends and outliers at different spatial locations and spatial scales within each plot.

Beyond what you can discover from each plot individually, how do you understand how the various plots relate to each other? For instance, imagine you have a data frame with columns _u_, _v_, _w_, _z_, and have separate plots of _u_ vs. _v_, _u_ vs. _w_, and _w_ vs. _z_. If you see a few outliers or a clump of unusual datapoints in your _u_ vs. _v_ plot, how can you find out the properties of those points in the _w_ vs. _z_ or other plots? Are those unusual _u_ vs. _v_ points typically high _w_, uniformly distributed along _w_, or some other pattern? To help understand multicolumnar and multidimensional datasets like this, scientists will often build complex multi-pane dashboards with custom functionality. HoloViz (and specifically Panel) tools are great for such dashboards, but here we can actually use the fact that hvPlot returns HoloViews objects to get quite sophisticated interlinking ([linked brushing](http://holoviews.org/user_guide/Linked_Brushing.html)) "for free", without needing to build any dashboard. HoloViews objects store metadata about what dimensions they cover, and we can use this metadata programmatically to let the user see how any data points in any plot relate across different plots.

To see how this works, let us get back to the example we were working on at the end of the last notebook:
import holoviews as hv
import pandas as pd
import hvplot.pandas  # noqa
import colorcet as cc
_____no_output_____
BSD-3-Clause
examples/tutorial/04_Interlinked_Plots.ipynb
maximlt/holoviz
First let us load the data as before:
%%time
df = pd.read_parquet('../data/earthquakes-projected.parq')
df.time = df.time.astype('datetime64[ns]')
df = df.set_index(df.time)
_____no_output_____
BSD-3-Clause
examples/tutorial/04_Interlinked_Plots.ipynb
maximlt/holoviz
And filter to the most severe earthquakes (magnitude `> 7`):
most_severe = df[df.mag >= 7]
_____no_output_____
BSD-3-Clause
examples/tutorial/04_Interlinked_Plots.ipynb
maximlt/holoviz
Linked brushing across elements

In the previous notebook, we saw how plot axes are automatically linked for panning and zooming when using the `+` operator, provided the dimensions match. When dimensions or an underlying index match across multiple plots, we can use a similar principle to achieve linked brushing, where user selections are also linked across plots.

To illustrate, let us generate two histograms from our `most_severe` DataFrame:
mag_hist = most_severe.hvplot(
    y='mag', kind='hist', responsive=True, min_height=150)

depth_hist = most_severe.hvplot(
    y='depth', kind='hist', responsive=True, min_height=150)
_____no_output_____
BSD-3-Clause
examples/tutorial/04_Interlinked_Plots.ipynb
maximlt/holoviz
These two histograms are plotting two different dimensions of our earthquake dataset (magnitude and depth), derived from the same set of earthquake samples. The samples between these two histograms share an index, and the relationships between these data points can be discovered and exploited programmatically even though they are in different elements. To do this, we can create an object for linking selections across elements:
ls = hv.link_selections.instance()
_____no_output_____
BSD-3-Clause
examples/tutorial/04_Interlinked_Plots.ipynb
maximlt/holoviz
Given some HoloViews objects (elements, layouts, etc.), we can create versions of them linked to this shared linking object by calling `ls` on them:
ls(depth_hist + mag_hist)
_____no_output_____
BSD-3-Clause
examples/tutorial/04_Interlinked_Plots.ipynb
maximlt/holoviz
Try using the first Bokeh tool to select areas of either histogram: you'll then see both the depth and magnitude distributions for the bins you have selected, compared to the overall distribution. By default, selections on both histograms are combined so that the selection is the intersection of the two regions selected (data points matching _both_ the constraints on depth and the constraints on magnitude that you select). For instance, try selecting the deepest earthquakes (around 600), and you can see that those are not specific to one particular magnitude. You can then further select a particular magnitude range, and see how that range is distributed in depth over the selected depth range. Linked selections like this make it feasible to look at specific regions of a multidimensional space and see how the properties of those regions compare to the properties of other regions. You can use the Bokeh reset tool (double arrow) to clear your selection.Note that these two histograms are derived from the same `DataFrame` and created in the same call to `ls`, but neither of those is necessary to achieve the linked behavior! If linking two different `DataFrames`, the important thing to check is that any columns with the same name actually do have the same meaning, and that any index columns match, so that the plots you are visualizing make sense when linked together. Linked brushing across element typesThe previous example linked across two histograms as a first example, but nothing prevents you from linked brushing across different element types. Here are our earthquake points, also derived from the same `DataFrame`, where the only change from earlier is that we are using the reversed warm colormap (described in the previous notebook):
geo = most_severe.hvplot(
    'easting', 'northing', color='mag', kind='points', tiles='ESRI',
    xlim=(-3e7,3e7), ylim=(-5e6,5e6), xaxis=None, yaxis=None,
    responsive=True, height=350, cmap=cc.CET_L4[::-1], framewise=True)
_____no_output_____
BSD-3-Clause
examples/tutorial/04_Interlinked_Plots.ipynb
maximlt/holoviz
Once again, we just need to pass our points to the `ls` object (newly declared here to be independent of the one above) to declare the linkage:
ls2 = hv.link_selections.instance()

(ls2(geo + depth_hist)).cols(1)
_____no_output_____
BSD-3-Clause
examples/tutorial/04_Interlinked_Plots.ipynb
maximlt/holoviz
Now you can use the box-select tool to select earthquakes on the map and view their corresponding depth distribution, or vice versa. E.g. if you select just the earthquakes in Alaska, you can see that they tend not to be very deep underground (though that may be a sampling issue). Other selections will show other properties, in this case typically with no obvious relationship between geographic location and depth distribution. Accessing the data selectionIf you pass your `DataFrame` into the `.filter` method of your linked selection object, you can apply the active filter that you specified interactively:
ls2.filter(most_severe)
_____no_output_____
BSD-3-Clause
examples/tutorial/04_Interlinked_Plots.ipynb
maximlt/holoviz
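For readers without the earthquake file, the same linking pattern works on any DataFrame. A minimal sketch with made-up data (the column names `u` and `v` are invented for illustration):

```python
import numpy as np
import pandas as pd
import holoviews as hv
import hvplot.pandas  # noqa

toy = pd.DataFrame({"u": np.random.randn(1000), "v": np.random.randn(1000)})

ls_toy = hv.link_selections.instance()
ls_toy(toy.hvplot.hist("u") + toy.hvplot.hist("v"))

# After making a selection in a live notebook, ls_toy.filter(toy) returns only the selected rows.
```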
How does our lab collect data?

Here was a small Python project that I thought of - are there trends in the rate of data collection in our lab at the CfA? From a qualitative sense, it always felt that when visitors come, several come at once, and one would expect this to show up in the number of scans produced in a short period of time.

Another question I'd like to ask is how long do we typically accumulate data for? This is reflected in the number of "shots", i.e. the number of accumulations at a repetition rate of 5 Hz (typically).

Finally, what are the most common frequencies the spectrometers are tuned to?

I have to state that I'm not sure what I'll find - this is mainly an exercise in Python (Pandas/Plotly).
ft1_df = pd.read_pickle("../data/FTM1_scans.pkl")
ft2_df = pd.read_pickle("../data/FTM2_scans.pkl")

# Convert the datetime handling into numpy format
for df in [ft1_df, ft2_df]:
    df["date"] = df["date"].astype("datetime64")
_____no_output_____
MIT
notebooks/1_Lab scan collection.ipynb
laserkelvin/SlowFourierTransform
Simple statistics behind the data collection, I'll be using FT1, and also exclude the last row (which is 2019).
yearly = ft1_df.groupby([ft1_df["date"].dt.year])
_____no_output_____
MIT
notebooks/1_Lab scan collection.ipynb
laserkelvin/SlowFourierTransform
Average number of scans per year
scans = ufloat(
    np.average(yearly["shots"].describe()["count"].iloc[:-1]),
    np.std(yearly["shots"].describe()["count"].iloc[:-1])
)
scans

shots = ufloat(
    np.average(yearly["shots"].describe()["mean"].iloc[:-1]),
    np.std(yearly["shots"].describe()["mean"].iloc[:-1])
)
shots
_____no_output_____
MIT
notebooks/1_Lab scan collection.ipynb
laserkelvin/SlowFourierTransform
Convert this to time spent per year in days
((shots / 5.) * scans) / 60. / 60. / 24.
_____no_output_____
MIT
notebooks/1_Lab scan collection.ipynb
laserkelvin/SlowFourierTransform
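As a plain-number unit check of the conversion above (the shot and scan counts here are made up; only the 5 Hz repetition rate comes from the text):

```python
shots_per_scan = 1000   # made-up example value
scans_per_year = 3000   # made-up example value
repetition_rate = 5     # Hz, from the text above

seconds_per_year = (shots_per_scan / repetition_rate) * scans_per_year
days_per_year = seconds_per_year / 60. / 60. / 24.
print(days_per_year)    # about 6.9 days for these made-up numbers
```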
What's the actual number of shots in a year?
actual_shots = ufloat(
    np.average(yearly.sum()["shots"].iloc[:-1]),
    np.std(yearly.sum()["shots"].iloc[:-1])
)
actual_shots

(actual_shots / 5. / 60.) / 60. / 24.
_____no_output_____
MIT
notebooks/1_Lab scan collection.ipynb
laserkelvin/SlowFourierTransform
So approximately, the experiments are taking data only for 42 days a year total. Of course, this doesn't reflect reality (you spend most of the time trying to make the experiment work the way you want to of course). I'm also curious how this compares with other labs...
# Bin all of the data into year, month, and day
grouped_dfs = [
    df.groupby([df["date"].dt.year, df["date"].dt.month, df["date"].dt.day]).count()
    for df in [ft1_df, ft2_df]
]

for df in grouped_dfs:
    df["cumulative"] = np.cumsum(df["id"])

flattened_dfs = [
    df.set_index(df.index.map(lambda t: pd.datetime(*t)))
    for df in grouped_dfs
]

layout = {
    "height": 600.,
    "yaxis": {
        "title": "Number of scans",
    },
    "xaxis": {
        "title": "Time"
    },
    "title": "How we collect data",
    "showlegend": True,
    "legend": {
        "x": 0.1,
        "y": 0.95
    }
}

fig = go.FigureWidget(layout=layout)
traces = [
    fig.add_scattergl(x=df.index, y=df["cumulative"], name=name)
    for df, name in zip(flattened_dfs, ["FT1", "FT2"])
]

isms_times = [datetime.datetime(year=year, month=6, day=17) for year in [2014, 2015, 2016, 2017, 2018]]
fig.add_bar(
    x=isms_times,
    y=[2e6] * len(isms_times),
    width=2e6,
    hoverinfo="name",
    name="ISMS"
)
fig

print(plot(fig, show_link=False, link_text="", output_type="div", include_plotlyjs=False))

shot_histo = [
    np.histogram(df["shots"], bins=[10, 50, 200, 500, 1000, 2000, 5000, 10000,])
    for df in [ft1_df, ft2_df]
]

fig = go.FigureWidget()
fig.layout["xaxis"]["type"] = "log"
fig.layout["yaxis"]["type"] = "log"
for histo, name in zip(shot_histo, ["FT1", "FT2"]):
    fig.add_scatter(x=histo[1], y=histo[0], name=name)
fig

freq_histo = [
    np.histogram(df["cavity"], bins=np.linspace(7000., 40000., 100))
    for df in [ft1_df, ft2_df]
]

fig = go.FigureWidget()
fig.layout["xaxis"]["tickformat"] = ".,"
fig.layout["xaxis"]["title"] = "Frequency (MHz)"
fig.layout["yaxis"]["title"] = "Counts"
fig.layout["title"] = "What are the most common frequencies?"
for histo, name in zip(freq_histo, ["FT1", "FT2"]):
    fig.add_bar(x=histo[1], y=histo[0], name=name)
fig

print(plot(fig, show_link=False, link_text="", output_type="div", include_plotlyjs=False))
[Plotly HTML output omitted: interactive figure "How we collect data", showing the cumulative number of scans versus time for the FT1 and FT2 spectrometers, with ISMS conference dates marked as bars.]
MIT
notebooks/1_Lab scan collection.ipynb
laserkelvin/SlowFourierTransform
***KNN Classification***
from sklearn.neighbors import KNeighborsClassifier knc = KNeighborsClassifier(n_neighbors = 17) X,y = credit.loc[:,credit.columns != 'Class'], credit.loc[:,'Class'] # note: X_train, X_test, y_train, y_test are assumed to come from an earlier train_test_split cell not shown here knc.fit(X_train,y_train) y_knc = knc.predict(X_test) print('accuracy of training set: {:.4f}'.format(knc.score(X_train,y_train))) print('accuracy of test set: {:.4f}'.format(knc.score(X_test, y_test))) from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score, precision_recall_curve print('confusion_matrix of KNN: ', confusion_matrix(y_test, y_knc)) print('precision_score of KNN: ', precision_score(y_test, y_knc)) print('recall_score of KNN: ', recall_score(y_test, y_knc)) print('precision_recall_curve: ', precision_recall_curve(y_test, y_knc))
confusion_matrix of KNN: [[71070 12] [ 26 94]] precision_score of KNN: 0.8867924528301887 recall_score of KNN: 0.7833333333333333 precision_recall_curve: (array([0.00168535, 0.88679245, 1. ]), array([1. , 0.78333333, 0. ]), array([0, 1]))
MIT
dataset/creditCard/Credit Card.ipynb
Necropsy/XXIIISI-Minicurso
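The KNN cell above fits and scores on `X_train`, `X_test`, `y_train`, `y_test`, which are not created in the snippet; presumably a train/test split was done in an earlier cell. The confusion-matrix total of 71,202 test rows is consistent with a 25% split of the commonly used 284,807-row credit-card fraud dataset. A minimal sketch of that assumed split (the random seed and stratification are illustrative, not taken from the notebook):

```python
from sklearn.model_selection import train_test_split

# 'credit' is the credit-card fraud DataFrame loaded earlier in the notebook
X = credit.loc[:, credit.columns != 'Class']
y = credit.loc[:, 'Class']

# hypothetical split; the notebook's actual seed is not shown
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
```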
**Random Forest Regression**
from sklearn.ensemble import RandomForestRegressor reg = RandomForestRegressor(n_estimators = 20, random_state = 0) reg.fit(X_train,y_train) y_rfr = reg.predict(X_test) reg.score(X_test, y_test) print('accuracy of training set: {:.4f}'.format(reg.score(X_train,y_train))) print('accuracy of test set: {:.4f}'.format(reg.score(X_test, y_test))) # note: the metrics below evaluate the decision-tree predictions (y_dtr) rather than y_rfr; continuous random-forest outputs would need thresholding before classification metrics print('accuracy_score of decision tree regression: ', accuracy_score( y_dtr , y_test)) print('confusion_matrix of decision tree regression: ', confusion_matrix(y_dtr, y_test)) print('precision_score of decision tree regression: ', precision_score( y_dtr, y_test)) print('recall_score of decision tree regression: ', recall_score( y_dtr, y_test)) print('precision_recall_curve: ', precision_recall_curve(y_dtr, y_test))
accuracy_score of decision tree regression: 0.999283727985169 confusion_matrix of decision tree regression: [[71061 30] [ 21 90]] precision_score of decision tree regression: 0.75 recall_score of decision tree regression: 0.8108108108108109 precision_recall_curve: (array([0.00155894, 0.75 , 1. ]), array([1. , 0.81081081, 0. ]), array([0, 1]))
MIT
dataset/creditCard/Credit Card.ipynb
Necropsy/XXIIISI-Minicurso
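A side note on the cell above: it fits a `RandomForestRegressor` but then scores the decision-tree predictions (`y_dtr`), which is why its printed metrics are identical to the decision-tree regression section below. Regression models return continuous values, so to evaluate them with classification metrics the predictions first need to be thresholded. A hedged sketch of how the random-forest predictions could be evaluated instead (the 0.5 threshold is an assumption):

```python
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score

# y_rfr holds continuous predictions from the RandomForestRegressor above;
# threshold at 0.5 to turn them into 0/1 class labels
y_rfr_class = (y_rfr >= 0.5).astype(int)

print('accuracy_score of random forest regression: ', accuracy_score(y_test, y_rfr_class))
print('confusion_matrix of random forest regression: ', confusion_matrix(y_test, y_rfr_class))
print('precision_score of random forest regression: ', precision_score(y_test, y_rfr_class))
print('recall_score of random forest regression: ', recall_score(y_test, y_rfr_class))
```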
**Decision Tree Regression**
from sklearn.tree import DecisionTreeRegressor regs = DecisionTreeRegressor(random_state = 0) regs.fit(X_train, y_train) y_dtr = regs.predict(X_test) regs.score(X_test, y_test) print('accuracy of training set: {:.4f}'.format(regs.score(X_train,y_train))) print('accuracy of test set: {:.4f}'.format(regs.score(X_test, y_test))) print('accuracy_score of decision tree regression: ', accuracy_score( y_dtr , y_test)) print('confusion_matrix of decision tree regression: ', confusion_matrix(y_dtr, y_test)) print('precision_score of decision tree regression: ', precision_score( y_dtr, y_test)) print('recall_score of decision tree regression: ', recall_score( y_dtr, y_test)) print('precision_recall_curve: ', precision_recall_curve(y_dtr, y_test))
accuracy_score of decision tree regression: 0.999283727985169 confusion_matrix of decision tree regression: [[71061 30] [ 21 90]] precision_score of decision tree regression: 0.75 recall_score of decision tree regression: 0.8108108108108109 precision_recall_curve: (array([0.00155894, 0.75 , 1. ]), array([1. , 0.81081081, 0. ]), array([0, 1]))
MIT
dataset/creditCard/Credit Card.ipynb
Necropsy/XXIIISI-Minicurso
**Logistic Regression**
from sklearn.linear_model import LogisticRegression logreg = LogisticRegression(random_state = 0) logreg.fit(X_train, y_train) y_lr = logreg.predict(X_test) logreg.score(X_test, y_test) print('accuracy of training set: {:.4f}'.format(logreg.score(X_train,y_train))) print('accuracy of test set: {:.4f}'.format(logreg.score(X_test, y_test))) print('accuracy_score of logistic regression : ', accuracy_score(y_test, y_lr)) print('confusion_matrix of logistic regression: ', confusion_matrix(y_test, y_lr)) print('precision_score of logistic regression: ', precision_score(y_test, y_lr)) print('recall_score of logistic regression: ', recall_score(y_test, y_lr)) print('precision_recall_curve: ', precision_recall_curve(y_test, y_lr)) logreg100 = LogisticRegression(random_state = 1000, C =100) logreg100.fit(X_train, y_train) y_lr100 = logreg100.predict(X_test) logreg100.score(X_test, y_test) print('accuracy of training set: {:.4f}'.format(logreg100.score(X_train,y_train))) print('accuracy of test set: {:.4f}'.format(logreg100.score(X_test, y_test))) logreg01 = LogisticRegression(random_state = 0, C =0.001) logreg01.fit(X_train, y_train) y_p01 = logreg01.predict(X_test) logreg01.score(X_test, y_test) print('accuracy of training set: {:.4f}'.format(logreg01.score(X_train,y_train))) print('accuracy of test set: {:.4f}'.format(logreg01.score(X_test, y_test)))
accuracy of training set: 0.9990 accuracy of test set: 0.9991
MIT
dataset/creditCard/Credit Card.ipynb
Necropsy/XXIIISI-Minicurso
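Throughout these cells `precision_recall_curve` is printed as raw arrays computed from hard 0/1 predictions, which gives only three points. It is more informative when fed probability scores and plotted. A sketch using the logistic-regression model above (assuming matplotlib is available in the environment):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

# use predicted probabilities of the positive (fraud) class rather than hard labels
y_scores = logreg.predict_proba(X_test)[:, 1]
precision, recall, thresholds = precision_recall_curve(y_test, y_scores)

plt.plot(recall, precision)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-recall curve for logistic regression')
plt.show()
```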
**Decision Tree Classification**
from sklearn.tree import DecisionTreeClassifier classifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 0) classifier.fit(X_train, y_train) y_dtc = classifier.predict(X_test) classifier.score(X_test, y_test) print('accuracy of training set: {:.4f}'.format(classifier.score(X_train,y_train))) print('accuracy of test set: {:.4f}'.format(classifier.score(X_test, y_test))) classifier = DecisionTreeClassifier(max_depth = 4, random_state = 42) classifier.fit(X_train,y_train) print('accuracy of training set: {:.4f}'.format(classifier.score(X_train,y_train))) print('accuracy of test set: {:.4f}'.format(classifier.score(X_test, y_test))) print('accuracy_score of decision tree classifier: ', accuracy_score(y_dtc, y_test)) print('confusion_matrix of decision tree classifier: ', confusion_matrix(y_dtc, y_test)) print('precision_score of decision tree classifier: ', precision_score(y_dtc, y_test)) print('recall_score of decision tree classifier: ', recall_score(y_dtc, y_test)) print('precision_recall_curve of decision tree classifier: ', precision_recall_curve(y_dtc, y_test))
accuracy_score of decision tree classifier: 0.9991994606893064 confusion_matrix of decision tree classifier: [[71048 23] [ 34 97]] precision_score of decision tree classifier: 0.8083333333333333 recall_score of decision tree classifier: 0.7404580152671756 precision_recall_curve of decision tree classifier: (array([0.00183984, 0.80833333, 1. ]), array([1. , 0.74045802, 0. ]), array([0, 1]))
MIT
dataset/creditCard/Credit Card.ipynb
Necropsy/XXIIISI-Minicurso
**Naive Bayes Classification**
from sklearn.naive_bayes import GaussianNB NBC = GaussianNB() NBC.fit(X_train, y_train) y_nb = NBC.predict(X_test) NBC.score(X_test, y_test) print('accuracy of training set: {:.4f}'.format(NBC.score(X_train,y_train))) print('accuracy of test set: {:.4f}'.format(NBC.score(X_test, y_test))) print('accuracy_score of Naive Bayes: ', accuracy_score(y_test, y_nb)) print('confusion_matrix of Naive Bayes: ', confusion_matrix(y_test, y_nb)) print('precision_score of Naive Bayes: ', precision_score(y_test, y_nb)) print('recall_score of Naive Bayes: ', recall_score(y_test, y_nb)) print('precision_recall_curve of Naive Bayes: ', precision_recall_curve(y_test, y_nb))
accuracy_score of Naive Bayes: 0.9784697059071374 confusion_matrix of Naive Bayes: [[69569 1513] [ 20 100]] precision_score of Naive Bayes: 0.06199628022318661 recall_score of Naive Bayes: 0.8333333333333334 precision_recall_curve of Naive Bayes: (array([0.00168535, 0.06199628, 1. ]), array([1. , 0.83333333, 0. ]), array([0, 1]))
MIT
dataset/creditCard/Credit Card.ipynb
Necropsy/XXIIISI-Minicurso
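Naive Bayes' very low precision (~0.06) looks alarming, but the dataset is extremely imbalanced: the 0.00168 value in the precision-recall output is simply the fraud rate in the test split (120 frauds out of 71,202 rows). A quick way to check the imbalance, assuming the `credit` DataFrame from earlier:

```python
# fraction of each class; fraud ('Class' == 1) is a tiny minority of transactions
print(credit['Class'].value_counts(normalize=True))
```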
#@title !git clone https://github.com/hiren14/World-health-organization-WHO-GUIDELINES-SYSTEM # clone %cd World-health-organization-WHO-GUIDELINES-SYSTEM %pip install -qr requirements.txt # install #@title import torch import utils display = utils.notebook_init() # checks !python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/bus.jpg !python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/img1.jpg !python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/img2.jpg !python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/img4.jpg !python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/img3.jpg
detect: weights=['yolov5s.pt'], source=data/images/img3.jpg, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False YOLOv5 🚀 3febfe4 torch 1.10.0+cu111 CPU Fusing layers... Model Summary: 213 layers, 7225885 parameters, 0 gradients image 1/1 /content/World-health-organization-WHO-GUIDELINES-SYSTEM/data/images/img3.jpg: 448x640 12 persons, 1 handbag, Done. (0.297s) Speed: 2.7ms pre-process, 296.8ms inference, 1.3ms NMS per image at shape (1, 3, 640, 640) Results saved to runs/detect/exp5
MIT
World_health_organization_WHO_GUIDELINES_SYSTEM.ipynb
hiren14/World-health-organization-WHO-GUIDELINES-SYSTEM
I once had a coworker tasked with creating a web-based dashboard. Unfortunately, the data he needed to log and visualize came from a binary application that had no documented developer API (it just printed everything to stdout) and no available source code. It was basically a black box that he had to write a Python wrapper around, using the [subprocess](https://docs.python.org/3/library/subprocess.html) module. His wrapper basically worked as follows: 1. Run the binary in a subprocess 2. Write an infinite loop that with each iteration attempts to... - capture each new line as it's printed by the subprocess - marshal the line into some form of structured data, i.e. a dictionary - log the information in the data structure A Synchronous Example
from subprocess import Popen, PIPE import logging; logging.getLogger().setLevel(logging.INFO) import sys import time import json PROG = """ import json import time from datetime import datetime while True: data = { 'time': datetime.now().strftime('%c %f milliseconds'), 'string': 'hello, world', } print(json.dumps(data)) """ with Popen([sys.executable, '-u', '-c', PROG], stdout=PIPE) as proc: last_line = '' start_time, delta = time.time(), 0 while delta < 5: # only loop for 5 seconds line = proc.stdout.readline().decode() # pretend marshalling the data takes 1 second data = json.loads(line); time.sleep(1) if line != last_line: logging.info(data) last_line = line delta = time.time() - start_time
INFO:root:{'time': 'Mon Sep 25 16:16:21 2017 690000 milliseconds', 'string': 'hello, world'} INFO:root:{'time': 'Mon Sep 25 16:16:21 2017 690084 milliseconds', 'string': 'hello, world'} INFO:root:{'time': 'Mon Sep 25 16:16:21 2017 690111 milliseconds', 'string': 'hello, world'} INFO:root:{'time': 'Mon Sep 25 16:16:21 2017 690131 milliseconds', 'string': 'hello, world'} INFO:root:{'time': 'Mon Sep 25 16:16:21 2017 690149 milliseconds', 'string': 'hello, world'}
MIT
notebooks/Wrapping Subprocesses in Asyncio.ipynb
knowsuchagency/knowsuchagency.github.io.old
The problem The problem my coworker had is that in the time he marshaled one line of output of the program and logged the information, several more lines had already been printed by the subprocess. His wrapper simply couldn't keep up with the subprocess' output. Notice in the example above that, although many more lines have obviously been printed by the program, we only capture the first few, since our subprocess "reads" new lines more slowly than they're printed. The solution: asyncio Instead of writing our own infinite loop, what if we had a loop that would allow us to run a subprocess and intelligently poll it to determine when a new line was ready to be read, yielding to the main thread to do other work if not? What if that same event loop allowed us to delegate the process of marshaling the json output to a ProcessPoolExecutor? What if this event loop was written into the Python standard library? Well... printer.py This program simply prints random stuff to stdout in an infinite loop:

```python
# printer.py
# print to stdout in an infinite loop
from datetime import datetime
from pathlib import Path
from time import sleep
from typing import List
import random
import json
import os


def get_words_from_os_dict() -> List[str]:
    p1 = Path('/usr/share/dict/words')  # mac os
    p2 = Path('/usr/dict/words')        # debian/ubuntu
    words: List[str] = []
    if p1.exists():    # note: .exists() must be called; the bare attribute is always truthy
        words = p1.read_text().splitlines()
    elif p2.exists():
        words = p2.read_text().splitlines()
    return words


def current_time() -> str:
    return datetime.now().strftime("%c")


def printer(words: List[str] = get_words_from_os_dict()) -> str:
    random_words = ':'.join(random.choices(words, k=random.randrange(2, 5))) if words else 'no OS words file found'
    return json.dumps({
        'current_time': current_time(),
        'words': random_words
    })


while True:
    seconds = random.randrange(5)
    print(f'{__file__} in process {os.getpid()} waiting {seconds} seconds to print json string')
    sleep(seconds)
    print(printer())
```

An Asynchronous Example This program wraps printer.py in a subprocess. It then delegates the marshaling of json to another process using the event loop's [`run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.run_in_executor) method, and prints the results to the screen.
#!/usr/bin/env python3 # # Spawns multiple instances of printer.py and attempts to deserialize the output # of each line in another process and print the result to the screen, import typing as T import asyncio.subprocess import logging import sys import json from concurrent.futures import ProcessPoolExecutor, Executor from functools import partial from contextlib import contextmanager @contextmanager def event_loop() -> asyncio.AbstractEventLoop: loop = asyncio.get_event_loop() # default asyncio event loop executor is # ThreadPoolExecutor which is usually fine for IO-bound # tasks, but bad if you need to do computation with ProcessPoolExecutor() as executor: loop.set_default_executor(executor) yield loop loop.close() print('\n\n---loop closed---\n\n') # any `async def` function is a coroutine async def read_json_from_subprocess( loop: asyncio.AbstractEventLoop = asyncio.get_event_loop(), executor: T.Optional[Executor] = None ) -> None: # wait for asyncio to initiate our subprocess process: asyncio.subprocess.Process = await create_process() while True: bytes_ = await process.stdout.readline() string = bytes_.decode('utf8') # deserialize_json is a function that # we'll send off to our executor deserialize_json = partial(json.loads, string) try: # run deserialize_json in the loop's default executor (ProcessPoolExecutor) # and wait for it to return output = await loop.run_in_executor(executor, deserialize_json) print(f'{process} -> {output}') except json.decoder.JSONDecodeError: logging.error('JSONDecodeError for input: ' + string.rstrip()) def create_process() -> asyncio.subprocess.Process: return asyncio.create_subprocess_exec( sys.executable, '-u', 'printer.py', stdout=asyncio.subprocess.PIPE ) async def run_for( n: int, loop: asyncio.AbstractEventLoop = asyncio.get_event_loop() ) -> None: """ Return after a set amount of time, cancelling all other tasks before doing so. """ start = loop.time() while True: await asyncio.sleep(0) if abs(loop.time() - start) > n: # cancel all other tasks for task in asyncio.Task.all_tasks(loop): if task is not asyncio.Task.current_task(): task.cancel() return with event_loop() as loop: coroutines = (read_json_from_subprocess() for _ in range(5)) # create Task from coroutines and schedule # it for execution on the event loop asyncio.gather(*coroutines) # this returns a Task and schedules it implicitly loop.run_until_complete(run_for(5))
_____no_output_____
MIT
notebooks/Wrapping Subprocesses in Asyncio.ipynb
knowsuchagency/knowsuchagency.github.io.old
Week 3 Introduction Date: 21 Oct 2021 Last week you learned about different methods for segmenting an image into regions of interest. In this session you will get some experience coding image segmentation algorithms. Your task will be to code a simple statistical method that uses k-means clustering.
import numpy as np import copy import cv2 import matplotlib.image as mpimg from matplotlib import pyplot as plt # to visualize the plots within the notebook %matplotlib inline
_____no_output_____
MIT
labs/week_3.ipynb
Meewnicorn/ImPro26
K-means Segmentation K-means clustering is a well-known approach for separating data (often of high dimensionality) into different groups depending on their distance. In the case of images this is a useful method for segmenting an image into regions, provided that the number of regions (k) is known in advance. It is based on the fact that pixels belonging to the same region will most likely have similar intensities. The algorithm is: a) Given the number of clusters, k, initialise their centres to some values. b) Go over the pixels in the image and assign each one to its closest cluster according to its distance to the centre of the cluster. c) Update the cluster centres to be the average of the pixels added. d) Repeat steps b and c until the cluster centres do not get updated anymore. 1. Use the k-means function in sklearn and see results First, you can use the built-in kmeans function in sklearn and see the results. You can figure out how to do this from the specification: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html Load image Important Note: Don't forget to convert the image to float representation.
# Load image and convert to float representation raw_img = cv2.imread("../images/sample_image.jpg") # change file name to load different images; note that cv2.imread returns BGR channel order, so plt.imshow will show swapped colours unless converted with cv2.COLOR_BGR2RGB raw_gray_img = cv2.cvtColor(raw_img, cv2.COLOR_BGR2GRAY) img = raw_img.astype(np.float32) / 255. gray_img = raw_gray_img.astype(np.float32) / 255. plt.subplot(1, 2, 1) plt.imshow(img) plt.subplot(1, 2, 2) plt.imshow(gray_img, "gray")
_____no_output_____
MIT
labs/week_3.ipynb
Meewnicorn/ImPro26
Results on Gray-scale Image
from sklearn.cluster import KMeans # write your code here
[[0.762243 ] [0.28945854]]
MIT
labs/week_3.ipynb
Meewnicorn/ImPro26
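One possible way to fill in the grayscale "write your code here" cell above (a sketch, not the official lab solution): flatten the image to an (N, 1) array of intensities, fit sklearn's `KMeans`, then reshape the labels back to the image shape. The two cluster centres shown in the output (~0.29 and ~0.76) are consistent with this approach for k = 2.

```python
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

k = 2
# each pixel becomes a 1-D sample (its intensity)
pixels = gray_img.reshape(-1, 1)

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
print(km.cluster_centers_)

# reshape the per-pixel labels back into an image and display them
labels = km.labels_.reshape(gray_img.shape)
plt.imshow(labels)
plt.show()
```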
Results on RGB image
# write your code here
[[0.82212555 0.7523794 0.7282207 ] [0.2380477 0.32608324 0.22135933]]
MIT
labs/week_3.ipynb
Meewnicorn/ImPro26
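For the RGB cell, the only change needed is to treat each pixel as a 3-component colour sample; the cluster centres in the output are then colour triplets. A sketch under the same assumptions as above:

```python
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

k = 2
# each pixel becomes a 3-component sample (its colour)
pixels = img.reshape(-1, 3)

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
print(km.cluster_centers_)

plt.imshow(km.labels_.reshape(img.shape[:2]))
plt.show()
```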
2. Implement your own k-meansNow you need to implement your own k-means function. Use your function on different greyscale images and try comparing the results to the results you get from sklearn kmeans function. Implement your own functions here:
def my_kmeans(I, k): """ Parameters ---------- I: the image to be segmented (greyscale to begin with) H by W array k: the number of clusters (use a simple image with k=2 to begin with) Returns ---------- clusters: a vector that contains the final cluster centres L: an array the same size as the input image that contains the label for each of the image pixels, according to which cluster it belongs """ assert len(I.shape) == 2, "Wrong input dimensions! Please make sure you are using a gray-scale image!" # Write your code here: return clusters, L def my_kmeans_rgb(I, k): """ Parameters ---------- I: the image to be segmented (RGB) H by W by 3 array k: the number of clusters (use a simple image with k=2 to begin with) Returns ---------- clusters: a vector that contains the final cluster centres L: an array the same size as the input image that contains the label for each of the image pixels, according to which cluster it belongs """ assert len(I.shape) == 3, "Wrong input dimensions! Please make sure you are using an RGB image!" # Write your code here: return clusters, L
_____no_output_____
MIT
labs/week_3.ipynb
Meewnicorn/ImPro26
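A minimal sketch of how `my_kmeans` could be completed, following steps a)-d) of the algorithm described above (one illustration among many possible solutions, not the official one; it initialises the centres from randomly chosen pixel values):

```python
import numpy as np

def my_kmeans_sketch(I, k, max_iter=100):
    """Cluster a greyscale image I (H x W) into k intensity clusters."""
    pixels = I.ravel()

    # a) initialise the centres from k randomly chosen pixel values
    rng = np.random.default_rng(0)
    clusters = rng.choice(pixels, size=k, replace=False).astype(float)

    for _ in range(max_iter):
        # b) assign every pixel to its nearest cluster centre
        distances = np.abs(pixels[:, None] - clusters[None, :])   # shape (N, k)
        labels = np.argmin(distances, axis=1)

        # c) update each centre to the mean of the pixels assigned to it
        new_clusters = np.array([
            pixels[labels == i].mean() if np.any(labels == i) else clusters[i]
            for i in range(k)
        ])

        # d) stop when the centres no longer move
        if np.allclose(new_clusters, clusters):
            break
        clusters = new_clusters

    L = labels.reshape(I.shape)
    return clusters, L
```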
Show results here:
centroids, labels = my_kmeans(gray_img, 2) print(centroids) plt.imshow(labels)
[0.28945825 0.76224351]
MIT
labs/week_3.ipynb
Meewnicorn/ImPro26
More things to try out:1. Try different values for k. For k > 2 you will need some way to display the output L (other than simple black and white). Consider using a colour map with the imshow function.2. Adapt your function so that it will handle colour images as well. What changes do you have to make?
# k=3 centroids, labels = my_kmeans(gray_img, 3) plt.imshow(labels) print(centroids) centroids, labels = my_kmeans_rgb(img, 2) plt.imshow(labels) print(centroids)
[[0.23840699 0.32619616 0.22162087] [0.82255203 0.75285155 0.72873324]]
MIT
labs/week_3.ipynb
Meewnicorn/ImPro26
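For the colour adaptation asked for in question 2 (`my_kmeans_rgb`), the main change is that distances are computed between 3-component colour vectors rather than scalar intensities. A hedged sketch of just the assignment step:

```python
import numpy as np

def assign_rgb(pixels, clusters):
    """pixels: (N, 3) colour samples, clusters: (k, 3) centres -> nearest-centre labels."""
    # Euclidean distance of every pixel to every centre
    distances = np.linalg.norm(pixels[:, None, :] - clusters[None, :, :], axis=2)  # (N, k)
    return np.argmin(distances, axis=1)
```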
Copyright 2019 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
site/en/r2/tutorials/estimators/_boosted_trees_model_understanding.ipynb
ThomasTransboundaryYan/docs