markdown (stringlengths 0–1.02M) | code (stringlengths 0–832k) | output (stringlengths 0–1.02M) | license (stringlengths 3–36) | path (stringlengths 6–265) | repo_name (stringlengths 6–127)
---|---|---|---|---|---|
- `JSON` turns any JSON-able `dict` into an expandable, filterable widget | tweet._json
JSON(tweet._json) | _____no_output_____ | MIT | 1-output.ipynb | fndari/nbpy-top-tweeters |
- `Image` generates an image from raw PNG data, a file path, or a URL | Image(tweet.user.profile_image_url) | _____no_output_____ | MIT | 1-output.ipynb | fndari/nbpy-top-tweeters |
- `Markdown` can be used to generate rich text programmatically in a cell's output | Markdown(f"""
*{tweet.user.name}* (`@{tweet.user.screen_name}`) is tweeting about **North Bay Python**!
""") | _____no_output_____ | MIT | 1-output.ipynb | fndari/nbpy-top-tweeters |
- `HTML` is able to render arbitrary HTML code | HTML('<a class="twitter-timeline" href="https://twitter.com/northbaypython?ref_src=twsrc%5Etfw">Tweets by northbaypython</a> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>'); | _____no_output_____ | MIT | 1-output.ipynb | fndari/nbpy-top-tweeters |
- `FileLink` generates a "smart" link to a file, relative to the notebook's working directory | FileLink('hey-nbpy.md') | _____no_output_____ | MIT | 1-output.ipynb | fndari/nbpy-top-tweeters |
--- We can use these building blocks to create rich representations and associate them with any object- Strategy 1: Register custom formatters for object types | def tweet_as_markdown(tweet):
quoted_text = '\n'.join(f'> {line}' for line in tweet.text.split('\n'))
author = f'--*{tweet.user.name}* (`@{tweet.user.screen_name}`) on {tweet.created_at}'
return quoted_text + '\n\n' + author
formatters = get_ipython().display_formatter.formatters
formatters['text/markdown'].for_type(tweepy.Status, tweet_as_markdown)
tweet | _____no_output_____ | MIT | 1-output.ipynb | fndari/nbpy-top-tweeters |
- Strategy 2: Implement `_repr_*_()` methods for custom classes Notes:- Let's say we want to move the tweet-counting code to its own class | class ScoreBoard:
def __init__(self, items, display_top=5):
self._items = items
self.display_top = display_top
@property
def counts_by_name(self):
return Counter(self._items)
@property
def to_display(self):
return self.counts_by_name.most_common(self.display_top)
def _repr_markdown_(self):
# effectively we're building the Markdown table that Jupyter will render for this object
lines = [
f'# [North Bay Python 2019](https://2019.northbaypython.org) Top {self.display_top} Tweeters',
'| name | # tweets |',
'|-|-|',
]
for name, count in self.to_display:
lines.append(f'| {name} | {count} |')
return '\n'.join(lines)
ScoreBoard(tweet_count_by_username, display_top=10) | _____no_output_____ | MIT | 1-output.ipynb | fndari/nbpy-top-tweeters |
- Rich output is rendered automatically when the object is the return value of a cell - Tip: use a `;` at the end of the last line in the cell to render nothing instead- Use the `display()` function to show rich output from anywhere in a cell (e.g. in a loop) - `display()` is versatile; falls back to text repr in a console | for tweet in nbpy_tweets[:10]:
display(tweet)
nbpy_tweets = api.search('#nbpy', count=100000)
import pickle
with open('nbpy_tweets.pkl', 'wb') as f:
pickle.dump(nbpy_tweets, f)
with open('nbpy_tweets.pkl', 'rb') as f:
unpickled = pickle.load(f)
len(unpickled) | _____no_output_____ | MIT | 1-output.ipynb | fndari/nbpy-top-tweeters |
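A small combined illustration (my sketch, assuming the `Markdown` class from `IPython.display` and the `tweet_as_markdown` helper defined above): `display()` can emit several rich objects from a single cell, and a trailing semicolon keeps the cell's last expression from rendering.

```python
from IPython.display import Markdown, display

# Render a few unpickled tweets as Markdown blocks from inside a loop
for tweet in unpickled[:3]:
    display(Markdown(tweet_as_markdown(tweet)))

len(unpickled);  # the trailing semicolon suppresses this value's output
```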
Tutorial Title A brief introduction to the tutorial that describes:- The problem that the tutorial addresses- Who the intended audience is- The expected experience level of that audience with a concept or tool - Which environment/language it runs in If there is another similar tutorial that's more appropriate for another audience, direct the reader there with a linked reference. How to Use This Tutorial A brief explanation of how the reader can use the tutorial. Can the reader copy each code snippet into a Python or other environment? Or can the reader run `` before or after reading through the explanations to understand how the code works? You can use this tutorial by *insert method(s) here*. A bulleted list of the tasks the reader will accomplish and skills he or she will learn. Begin each list item with a noun (Learn, Create, Use, etc.).You will accomplish the following:- First task or skill- Second task or skill- X task or skill Prerequisites Provide a *complete* list of the software, hardware, knowledge, and skills required to be successful using the tutorial. For each item, link the item to installation instructions, specs, or skill development tools, as appropriate. If good installation instructions aren't available for required software, start the tutorial with instructions for installing it. To complete this tutorial, you need:- [MXNet](https://mxnet.incubator.apache.org/install/overview)- [Language](https://mxnet.incubator.apache.org/tutorials/)- [Tool](https://mxnet.incubator.apache.org/api/python/index.html)- [Familiarity with concept or tool](https://gluon.mxnet.io/) The Data Provide a link to where the data is hosted and explain how to download it. If it requires more than two steps, use a numbered list. You can download the data used in this tutorial from the [Site Name](http://) site. To download the data:1. At the `` prompt, type: ``2. Second task.3. Last task. Briefly describe key aspects of the data. If there are two or more aspects of the data that require involved discussion, use subheads ( ``). To include a graphic, introduce it with a brief description and use the image linking tool to include it. Store the graphic in GitHub and use the following format: . You do not need to provide a title for your graphics. The data *add description here. (optional)* (Optional) Concept or Component Name If concepts or components need further introduction, include this section. If there are two or more aspects of the concept or component that require involved discussion, use subheads ( Concept or Sub-component Name). | ## Prepare the Data | _____no_output_____ | Apache-2.0 | example/MXNetTutorialTemplate.ipynb | simonmaurer/BMXNet-v2 |
Navigation functionalityThe Navigation functionality (`veneer.navigation`) lets you query and modify the current Source model using normal Python object notation. For example:```scenario.Network.Nodes[0].Name = 'Renamed node'```This notebook introduces the navigation functionality, including how it ties into the existing functionality in `v.model` as well as current limitations. SetupStart up Veneer as per usual... | import veneer
v = veneer.Veneer(19876)
v.status() | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
Initialise a Queryable object`Queryable` is the main component in `veneer.navigate`. By default, a Queryable points to the scenario. | from veneer.navigate import Queryable
scenario = Queryable(v)
scenario.Name | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
Tab completion`Queryable` objects work with the tab completion in IPython/Jupyter, including, in many cases, for nested objects: | scenario.Network.nodes.Count | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
... However this won't work after indexing into a list or dictionary:You can still access the properties of the list item, if you know what they're called: | scenario.Network.nodes[0].Name | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
... But if you want tab completion, create a variable **and run the cell that creates it**. Then tab into the new variable: | node = scenario.Network.nodes[0]
# node.<tab> WON'T WORK YET. You need to run this cell first
# Now tab completion should work
node.Name | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
Accessing a particular node/link/fu/etcThe above examples start from the scenario - which works, but is tedious when you need a particular node or link (or all water users, etc).Here, the existing functionality under `v.model` has been expanded to return Queryable objects that can be used for object navigation.All of the relevant `v.model` areas (link, node, catchment, etc) now have a `nav_first` method, which accepts the same query parameters as the other operations. For example, `v.model.node.nav_first` accepts the `nodes` and `node_types` query parameters, in the same way as `v.model.node.get_param_values`.As always, the query parameters are available in the help, one level up: | v.model.node? | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
... So we can get a particular node, for navigation as follows. | callide = v.model.node.nav_first(nodes='Callide Dam') | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
**Note:** The function is called `nav_first` to emphasise that you will receive the **first** match for the query. So, if you query `node_types='WaterUser'`, you'll get the first Water User matched.It's likely that we'll add a more generic `nav` method at some point that allows you to get a Queryable capable of bulk operations. Working with the navigation objectFrom the examples above, it looks like, you can work with the `Queryable` object as a normal Python object. In *some* cases you can, but not always.For changing values in the object (ie changing the relevant property in the Source model), you can indeed set the value directly: | callide.Name
callide.fullSupplyLevel
callide.fullSupplyVolume
callide.fullSupplyVolume = 136300000
callide.fullSupplyVolume | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
**Note:** Changing one thing may have a side effect, impacting another property: | callide.fullSupplyLevel | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
If a property can't be set this way, it *should* tell you: | # CAUSES EXCEPTION
#callide.fullSupplyLevel = 216 | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
Things aren't necessarily as they seem!The above examples suggest that the following would work: | # Would be nice, but doesn't work...
#callide.fullSupplyVolume = 1.1 * callide.fullSupplyVolume | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
The reason is, `callide.fullSupplyVolume` is, itself, a `Queryable` object: | callide.fullSupplyVolume.__class__ | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
So, although it prints out as a number, it is in fact a reference to the value within the model.If you want to actually use the *value* in an expression (eg to set another property), you'll need to use the `_eval_()` method: | callide.fullSupplyVolume = 1.1 * callide.fullSupplyVolume._eval_()
callide.fullSupplyVolume
callide.fullSupplyVolume = callide.fullSupplyVolume._eval_() / 1.1
callide.fullSupplyVolume | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
Evaluating a QueryableIt's not ideal to need to call `._eval_()` and we plan to improve this over time.Also, the `_eval_()` workaround only works for simple types - numbers, strings, booleans, etc. It doesn't work for complex objects, such as Nodes, Links and model instances.For example you CANNOT do this at the moment:```storage = v.model.node.nav_first(nodes='my storage')link = v.model.link.nav_first(links='some link') Try to set the outlet link for the storage...storage.OutletPaths[0] = link WILL NOT WORK!``` Bulk changesWhen you use the model configuration functionality under `v.model` every operation is a **bulk** operation by default - you use query parameters to limit the application:For example, the following would retrieve Easting and Northing (really, just the node coordinates) for every node:```eastings = v.model.node.get_param_values('Node.location.E')northings = v.model.node.get_param_values('Node.location.N')```while the following would do the same for only storages, by using query parameters:```eastings = v.model.node.get_param_values('Node.location.E',node_types='Storage')northings = v.model.node.get_param_values('Node.location.N',node_types='Storage')```When using `Queryable`, everything is, currently, an operation on a single object (eg a single node): | callide.Node.location.E | _____no_output_____ | ISC | doc/examples/navigation/0-Introduction.ipynb | flowmatters/veneer-py |
Generate a SWOT swath.This example lets us understand how to initialize the simulation parameters (error, SSH interpolation, orbit), generate an orbit, generate a swath, interpolate the SSH, and simulate the measurement errors. Finally, we visualize the simulated data. Simulation setupThe configuration is defined using an associative dictionary between the expected parameters and the values of its parameters. The description of the parameters is available on the [online help](https://swot-simulator.readthedocs.io/en/latest/generated/swot_simulator.settings.Parameters.htmlswot_simulator.settings.Parameters).This array can be loaded from a Python file using the [eval_config_file](https://swot-simulator.readthedocs.io/en/latest/generated/swot_simulator.settings.eval_config_file.htmlswot_simulator.settings.eval_config_file) method. But it can also be declared directly in Python code. | import swot_simulator
configuration = dict(
# The swath contains in its centre a central pixel divided in two by the
# reference ground track.
central_pixel=True,
# Distance, in km, between two points along track direction.
delta_al=2.0,
# Distance, in km, between two points across track direction.
delta_ac=2.0,
# Distance, in km, between the nadir and the center of the first pixel of the
# swath
half_gap=2.0,
# Distance, in km, between the nadir and the center of the last pixel of the
# swath
half_swath=70.0,
# Limits of SWOT swath requirements. Measurements outside the span will be
# set with fill values.
requirement_bounds=[10, 60],
# Ephemeris file to read containing the satellite's orbit.
ephemeris=swot_simulator.DATA.joinpath(
"ephemeris_calval_june2015_ell.txt"),
# Generation of measurement noise.
noise=[
'altimeter',
'baseline_dilation',
'karin',
'roll_phase',
#'orbital', (This generator consumes too much memory to run with binder)
'timing',
'wet_troposphere',
],
# File containing spectrum of instrument error
error_spectrum=swot_simulator.DATA.joinpath("error_spectrum.nc"),
# KaRIN file containing spectrum for several SWH
karin_noise=swot_simulator.DATA.joinpath("karin_noise_v2.nc"),
# The plug-in handling the SSH interpolation under the satellite swath.
#ssh_plugin = TODO
) | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
We create the parameter object for our simulation. | import swot_simulator.settings
parameters = swot_simulator.settings.Parameters(configuration) | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
---**Note**The [Parameter](https://swot-simulator.readthedocs.io/en/latest/generated/swot_simulator.settings.Parameters.htmlswot_simulator.settings.Parameters) class exposes the [load_default](https://swot-simulator.readthedocs.io/en/latest/generated/swot_simulator.settings.Parameters.load_default.htmlswot_simulator.settings.Parameters.load_default) method returning the default parameters of the simulation:```pythonparameters = swot_simulator.settings.Parameters.load_default()```It is also possible to [automatically load thedictionary](https://swot-simulator.readthedocs.io/en/latest/generated/swot_simulator.settings.template.html)containing the simulation parameters to adapt them to your needs and after allcreate the parameters of your simulation.```pythonconfiguration = swot_simulator.settings.template(python=True)parameters = swot_simulator.settings.Parameters(configuration)```--- SSH interpolationThe written configuration allows us to simulate a swath. However, the interpolation of the SSH under the satellite swath remains undefined. If you don't need this parameter, you can skip this setting.For our example, we use the SSH of the CMEMS grids provided on the Pangeo site. | import intake
cat = intake.open_catalog("https://raw.githubusercontent.com/pangeo-data/"
"pangeo-datastore/master/intake-catalogs/master.yaml")
ds = cat.ocean.sea_surface_height.to_dask()
ds | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
To interpolate SSH, we need to implement a class that must define a method tointerpolate the data under the swath. This class must be derived from the[CartesianGridHandler](https://swot-simulator.readthedocs.io/en/latest/generated/swot_simulator.plugins.data_handler.CartesianGridHandler.html)class to be correctly taken into account by the class managing the parameters. | import pyinterp.backends.xarray
import numpy
import xarray
#
import swot_simulator.plugins.data_handler
class CMEMS(swot_simulator.plugins.data_handler.CartesianGridHandler):
"""
Interpolation of the SSH AVISO (CMEMS L4 products).
"""
def __init__(self, adt):
self.adt = adt
ts = adt.time.data
assert numpy.all(ts[:-1] <= ts[1:])
# The frequency between the grids must be constant.
frequency = set(numpy.diff(ts.astype("datetime64[s]").astype("int64")))
if len(frequency) != 1:
raise RuntimeError(
"Time series does not have a constant step between two "
f"grids: {frequency} seconds")
# The frequency is stored in order to load the grids required to
# interpolate the SSH.
self.dt = numpy.timedelta64(frequency.pop(), 'ns')
def load_dataset(self, first_date, last_date):
"""Loads the 3D cube describing the SSH in time and space."""
if first_date < self.adt.time[0] or last_date > self.adt.time[-1]:
raise IndexError(
f"period [{first_date}, {last_date}] is out of range: "
f"[{self.adt.time[0]}, {self.adt.time[-1]}]")
first_date = self.adt.time.sel(time=first_date, method='pad')
last_date = self.adt.time.sel(time=last_date, method='backfill')
selected = self.adt.loc[dict(time=slice(first_date, last_date))]
selected = selected.compute()
return pyinterp.backends.xarray.Grid3D(selected.adt)
def interpolate(self, lon, lat, time):
"""Interpolate the SSH to the required coordinates"""
interpolator = self.load_dataset(time.min(), time.max())
ssh = interpolator.trivariate(dict(longitude=lon,
latitude=lat,
time=time),
interpolator='bilinear')
return ssh | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
Now we can update our parameters. | parameters.ssh_plugin = CMEMS(ds) | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
Initiating orbit propagator.Initialization is simply done by [loading](https://swot-simulator.readthedocs.io/en/latest/generated/swot_simulator.orbit_propagator.load_ephemeris.htmlswot_simulator.orbit_propagator.load_ephemeris) the ephemeris file. The satellite's one-day pass is taken into account in this case. | import swot_simulator.orbit_propagator
with open(parameters.ephemeris, "r") as stream:
orbit = swot_simulator.orbit_propagator.calculate_orbit(parameters, stream) | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
Iterate on the half-orbits of a period.To iterate over all the half-orbits of a period, call the method [iterate](https://swot-simulator.readthedocs.io/en/latest/generated/swot_simulator.orbit_propagator.Orbit.iterate.htmlswot_simulator.orbit_propagator.Orbit.iterate). This method returns all cycle numbers, trace numbers, and start dates of the half orbits within the period. If the start date is not set, the method uses the current date. If the end date is not set, the method sets the end date to the start date plus the cycle duration.In our case, we generate a cycle from January 1, 2000. | first_date = numpy.datetime64("2000-01-01")
iterator = orbit.iterate(first_date)
cycle_number, pass_number, date = next(iterator)
cycle_number, pass_number, date | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
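Just to illustrate what `iterate` yields (my addition; a fresh iterator is created so the one above is left untouched), the half-orbits of a full cycle can be listed:

```python
# Without an end date, the iterator covers one full cycle starting at first_date
half_orbits = list(orbit.iterate(first_date))
print(len(half_orbits), "half-orbits in the cycle; first entry:", half_orbits[0])
```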
Initialization of measurement error generatorsError initialization is done simply by calling the appropriate [class](https://swot-simulator.readthedocs.io/en/latest/generated/swot_simulator.error.generator.Generator.htmlswot_simulator.error.generator.Generator). The initialization of the wet troposphere error generator takes a little time (about 40 seconds), which explains the processing time for the next cell. | import swot_simulator.error.generator
error_generator = swot_simulator.error.generator.Generator(
parameters, first_date, orbit.orbit_duration()) | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
Generate the positions under the swath.To perform this task, the following function is implemented.> If the position of the pass is outside the area of interest (`parameters.area`),> the generation of the pass can return `None`. | def generate_one_track(pass_number, date, orbit):
# Compute the spatial/temporal position of the satellite
track = swot_simulator.orbit_propagator.calculate_pass(
pass_number, orbit, parameters)
# If the pass is not located in the area of interest (parameter.area)
# the result of the generation can be null.
if track is None:
return None
# Set the simulated date
track.set_simulated_date(date)
return track | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
Interpolate SSHInterpolation of the SSH for the space-time coordinates generated by the simulator. | def interpolate_ssh(parameters, track):
swath_time = numpy.repeat(track.time, track.lon.shape[1]).reshape(track.lon.shape)
ssh = parameters.ssh_plugin.interpolate(track.lon.flatten(),
track.lat.flatten(),
swath_time.flatten())
return ssh.reshape(track.lon.shape) | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
Calculation of instrumental errorsSimulation of instrumental errors. > Karin's instrumental noise can be modulated by wave heights.> The parameter SWH takes either a constant or a matrix defining> the SWH for the swath positions. | def generate_instrumental_errors(error_generator, cycle_number, pass_number,
orbit, track):
return error_generator.generate(cycle_number,
pass_number,
orbit.curvilinear_distance,
track.time,
track.x_al,
track.x_ac,
swh=2.0) | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
Calculates the sum of the simulated errors. | def sum_error(errors, swath=True):
"""Calculate the sum of errors"""
dims = 2 if swath else 1
return numpy.add.reduce(
[item for item in errors.values() if len(item.shape) == dims]) | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
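A toy check of that reduction (not from the notebook): with `swath=True`, only the two-dimensional swath-shaped errors are summed and the one-dimensional along-track errors are left out.

```python
# Illustrative dict with made-up shapes
toy_errors = {
    'karin': numpy.ones((3, 4)),     # swath-shaped error, kept
    'timing': numpy.ones((3, 4)),    # swath-shaped error, kept
    'altimeter': numpy.ones(3),      # 1-D along-track error, ignored when swath=True
}
print(sum_error(toy_errors).shape)   # (3, 4)
```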
Create the swath dataset Generation of the simulated swath. The function returns an xarray dataset for the half-orbit generated. | import swot_simulator.netcdf
def generate_dataset(cycle_number,
pass_number,
track,
ssh,
noise_errors,
complete_product=False):
product = swot_simulator.netcdf.Swath(track, central_pixel=True)
# Mask to set the measurements outside the requirements of the mission to
# NaN.
mask = track.mask()
ssh *= mask
product.ssh(ssh + sum_error(noise_errors))
product.simulated_true_ssh(ssh)
for error in noise_errors.values():
# Only the swaths must be masked
if len(error.shape) == 2:
error *= mask
product.update_noise_errors(noise_errors)
return product.to_xarray(cycle_number, pass_number, complete_product) | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
Swath generation.Now we can combine the different components to generate the swath. | import dask
import dask.distributed
def generate_swath(cycle_number, pass_number, date, parameters,
error_generator, orbit):
client = dask.distributed.get_client()
# Scatter big data
orbit_ = client.scatter(orbit)
error_generator_ = client.scatter(error_generator)
# Compute swath positions
track = dask.delayed(generate_one_track)(pass_number, date, orbit_)
# Interpolate SSH
ssh = dask.delayed(interpolate_ssh)(parameters, track)
# Simulate instrumental errors
noise_errors = dask.delayed(generate_instrumental_errors)(error_generator_,
cycle_number,
pass_number,
orbit_, track)
# Finally generate the dataset
return dask.delayed(generate_dataset)(
cycle_number, pass_number, track, ssh, noise_errors,
parameters.complete_product).compute() | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
The simulator calculation can be distributed on a Dask cluster. | import dask.distributed
# A local cluster is used here.
cluster = dask.distributed.LocalCluster()
client = dask.distributed.Client(cluster)
client
error_generator_ = client.scatter(error_generator)
parameters_ = client.scatter(parameters)
orbit_ = client.scatter(orbit)
future = client.submit(generate_swath, cycle_number, pass_number, date,
parameters_, error_generator_, orbit_)
ds = client.gather(future)
ds | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
To calculate a whole set of passes you can use the following code: `futures = []`, then `for cycle_number, pass_number, date in iterator: futures.append(client.submit(generate_swath, cycle_number, pass_number, date, parameters_, error_generator_, orbit_))`, and finally `client.gather(futures)`. Visualization | import matplotlib.pyplot
import cartopy.crs
import cartopy.feature
%matplotlib inline | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
Selection of a reduced geographical area for visualization. | selected = ds.where((ds.latitude > -50) & (ds.latitude < -40), drop=True) | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
Simulated SSH measurements (Interpolated SSH and simulated instrumental errors). | fig = matplotlib.pyplot.figure(figsize=(24, 12))
ax = fig.add_subplot(1, 1, 1, projection=cartopy.crs.PlateCarree())
contourf = ax.contourf(selected.longitude,
selected.latitude,
selected.ssh_karin,
transform=cartopy.crs.PlateCarree(),
levels=255,
cmap='jet')
fig.colorbar(contourf, orientation='vertical')
ax.set_extent([60, 69, -50, -40], crs=cartopy.crs.PlateCarree())
ax.gridlines(draw_labels=True, dms=True, x_inline=False, y_inline=False)
ax.coastlines() | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
Simulated KaRIN instrumental noise. | for item in selected.variables:
if item.startswith("simulated_error"):
variable = selected.variables[item]
fig = matplotlib.pyplot.figure(figsize=(18, 8))
ax = fig.add_subplot(1, 1, 1)
image = ax.imshow(variable.T,
extent=[0, len(selected.num_lines), -70, 70],
cmap='jet')
ax.set_title(variable.attrs['long_name'] + "(" +
variable.attrs['units'] + ")")
ax.set_xlabel("num_lines")
ax.set_ylabel("num_pixels")
fig.colorbar(image,
orientation='vertical',
fraction=0.046 * 70 / 250,
pad=0.04) | _____no_output_____ | BSD-3-Clause | notebooks/swath_generation.ipynb | CNES/swot_simulator |
Research computing with Python------ Workflow of research computing * Literature review* Ideas - Hypothesis formulation - Methodology* Data generation - Simulation - coding by yourself or use existing models (such as [LAMMPS](http://lammps.sandia.gov/doc/Manual.html) and [OpenFOAM](https://openfoam.org/)) - Experiment control * Data analysis - Tools for data post-processing - Implementation of algorithm - Visualisation * Publishing your results - Manuscripts - Figures/tables/animations - Your research data and source codes might be required by reviewers so that they can reproduce and validate your published results------ Requirements in computation-based research_"Just make it work"_ is NOT enough.* Minimum standard: __reproducibility__ - both the author and other researchers are able to rerun the simulations and reproduce the same results using the data and computer code that are used to generate the published results by the author* Ultimate standard: __replicability__ - research findings/claims that involve numerical simulations are able to be replicated by other researchers using independent methods and data (e.g. can your simulation results be validated against those of experiments?) Therefore reproducible may _NOT_ be replicable. See: [Peng, 2011. Reproducible Research in Computational Science. DOI: 10.1126/science.1213847](http://science.sciencemag.org/content/334/6060/1226.full) ------ Why PythonPython is a modern, fully-featured, general-purpose, high-level interpreted language, and supports multiple programming paradigms, e.g. object-oriented and functional programming.* Easy to learn and quick to program in - more time on scientific thinking, less time on programming. Note that our main work is research.* Dynamically-typed and automatic memory management - No need to define types for variables and functions. No need to manually allocate and deallocate memory for data arrays (see the small sketch after this cell) * Expressive - not only readable, but also concise | persons = [name for name in ["Adam", "James", "Dan", "Smith"] if name.startswith("A") or name.endswith("s")]
print(persons) | ['Adam', 'James']
| Unlicense | L01_Research_computing_with_Python.ipynb | Olaolutosin/phythosin |
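A tiny illustration (my example) of the dynamic-typing and automatic memory management point above:

```python
# No type declarations, no manual allocation or deallocation
values = [x ** 2 for x in range(5)]   # a list, sized and freed automatically
values = "now rebound to a string"    # the same name can refer to a different type
print(values)
```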
Read IRSM FORM | cms_xml = xml_parser.get_files('irsmform xml', folder = 'linear TSR logs')
cms_xml_out = xml_parser.get_files('out xml', folder = 'linear TSR logs')
cms_csv = xml_parser.get_files('CMS 10y csv', folder = 'linear TSR logs')
cms_replic_basket = csv_parser.parse_csv(cms_csv)
cal_basket = list(xml_parser.get_calib_basket(cms_xml_out))
main_curve, sprds = xml_parser.get_rate_curves(cms_xml)
dsc_curve = main_curve
try:
estim_curve = sprds[0]
except TypeError:
estim_curve = main_curve | _____no_output_____ | MIT | notebooks/gold standard/.ipynb_checkpoints/2. recon_tsr_strikes_normalVol-checkpoint.ipynb | nkapchenko/TSR |
Kmin & Kmax reconciliation | maxe = 0
tsr_fwds, xml_fwds = [], []
for (caplet, floorlet, swaplet), swo in zip(cms_replic_basket, cal_basket):
tsr_fwds.append(caplet.fwd)
xml_fwds.append(swo.get_swap_rate(dsc_curve, estim_curve))
kmax = tsr.minmax_strikes(swo.vol, swo.expiry, swo.get_swap_rate(dsc_curve, estim_curve), caplet.n).kmax
kmin = tsr.minmax_strikes(swo.vol, swo.expiry, swo.get_swap_rate(dsc_curve, estim_curve), floorlet.n).kmin
maxe = max(maxe, kmax - caplet.strike_max)
# print('Tf', int(caplet.fixing_date),'kmax diff: ',kmax - caplet.strike_max,'kmin diff: ',kmin - floorlet.strike_min)
print(colored('hello', 'red') if maxe > 1e-09 else colored('OK', 'green'))
array(xml_fwds) - array(tsr_fwds)
print(colored('hello', 'red') if max(array(xml_fwds) - array(tsr_fwds)) > 1e-09 else colored('OK', 'green')) | [32mOK[0m
| MIT | notebooks/gold standard/.ipynb_checkpoints/2. recon_tsr_strikes_normalVol-checkpoint.ipynb | nkapchenko/TSR |
Strike ladder recon | swo = cal_basket[-1]
caplet, floorlet, swaplet = cms_replic_basket[-1]
fwd = swo.get_swap_rate(dsc_curve, estim_curve)
nref = caplet.n
minmax_strikes = tsr.minmax_strikes(swo.vol, swo.expiry, fwd, nref)
neff_capl = math.ceil((minmax_strikes.kmax - minmax_strikes.fwd)/minmax_strikes.kstep) + 1
neff_floo = math.floor((minmax_strikes.fwd - minmax_strikes.kmin)/minmax_strikes.kstep) + 1
strikes_ladders = tsr.build_strike_ladders(minmax_strikes, neff_capl, neff_floo)
# print(caplet.calib_basket.Strike.values - strikes_ladders.caplet_ladder)
max(caplet.calib_basket.Strike.values - strikes_ladders.caplet_ladder)
print(colored('hello', 'red') if max(caplet.calib_basket.Strike.values - strikes_ladders.caplet_ladder) > 1e-09 else\
colored('OK', 'green'))
# print(floorlet.calib_basket.Strike.values - strikes_ladders.floorlet_ladder)
print(colored('hello', 'red') if max(floorlet.calib_basket.Strike.values - strikes_ladders.floorlet_ladder) > 1e-09\
else colored('OK', 'green')) | [32mOK[0m
| MIT | notebooks/gold standard/.ipynb_checkpoints/2. recon_tsr_strikes_normalVol-checkpoint.ipynb | nkapchenko/TSR |
Weights recon | mr = xml_parser.get_tsr_params(cms_xml_out).meanRevTSRSwapRate(caplet.fixing_date)
tsr_coeff = linear.get_coeff(caplet.pmnt_date, dsc_curve, swo, mr, estim_curve)
tsr_weights = tsr.build_weights(minmax_strikes, neff_capl, neff_floo, tsr_coeff)
# print(caplet.calib_basket.Weights.values - tsr_weights.capletWeights)
print(colored('ALARM', 'red') if max(abs(caplet.calib_basket.Weights.values - tsr_weights.capletWeights)) > 3e-09\
else colored('OK', 'green'))
# print(floorlet.calib_basket.Weights.values - tsr_weights.floorletWeights)
print(colored('ALARM', 'red') if max(abs(floorlet.calib_basket.Weights.values - tsr_weights.floorletWeights)) > 3e-09\
else colored('OK', 'green')) | [32mOK[0m
| MIT | notebooks/gold standard/.ipynb_checkpoints/2. recon_tsr_strikes_normalVol-checkpoint.ipynb | nkapchenko/TSR |
Disc(Tf, Tp) / Annuity | print(caplet.calib_basket['Disc/A'].values - tsr.get_DiscOverAnnuity(strikes_ladders.caplet_ladder, tsr_coeff))
print(colored('ALARM', 'red') if max(abs(caplet.calib_basket['Disc/A'].values -\
tsr.get_DiscOverAnnuity(strikes_ladders.caplet_ladder, tsr_coeff))) > 3e-09 else colored('OK', 'green'))
# print(floorlet.calib_basket['Disc/A'].values - tsr.get_DiscOverAnnuity(strikes_ladders.floorlet_ladder, tsr_coeff))
print(colored('ALARM', 'red') if max(abs(floorlet.calib_basket['Disc/A'].values - \
tsr.get_DiscOverAnnuity(strikes_ladders.floorlet_ladder, tsr_coeff))) > 4e-09 else colored('OK', 'green'))
myBachelierCaplet = array([volatility.BachelierPrice(F=swo.get_swap_rate(dsc_curve, estim_curve),
K=strike,
v=swo.vol.value*np.sqrt(swo.expiry))
* swo.get_annuity(dsc_curve) / dsc_curve.get_dsc(swo.start_date)
for strike in strikes_ladders.caplet_ladder])
plt.plot(myBachelierCaplet, label='Bachelier')
plt.plot(caplet.calib_basket.SwoPrice.values, label='Swaption', ls='--')
plt.legend()
max(myBachelierCaplet - caplet.calib_basket.SwoPrice.values)
print(colored('ALARM', 'red') if max(myBachelierCaplet - caplet.calib_basket.SwoPrice.values) > 2e-05 else colored('OK e-05', 'green'))
(myBachelierCaplet - caplet.calib_basket.SwoPrice.values)
myBachelierFloorlet = array([volatility.BachelierPrice(
F=swo.get_swap_rate(dsc_curve, estim_curve),
K=strike,
v=swo.vol.value*np.sqrt(swo.expiry), w=-1)
* swo.get_annuity(dsc_curve) / dsc_curve.get_dsc(swo.start_date)
for strike in strikes_ladders.floorlet_ladder])
plt.plot(myBachelierFloorlet, label='Bachelier')
plt.plot(floorlet.calib_basket.SwoPrice.values, label='Swaption')
plt.legend()
print(colored('ALARM', 'red') if max(myBachelierFloorlet - floorlet.calib_basket.SwoPrice.values) > 2e-05 else colored('OK e-05', 'green')) | [32mOK e-05[0m
| MIT | notebooks/gold standard/.ipynb_checkpoints/2. recon_tsr_strikes_normalVol-checkpoint.ipynb | nkapchenko/TSR |
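For reference, the checks above rely on the normal-model (Bachelier) price. A minimal standalone sketch of that closed form is below; the argument names mirror the notebook's call (`F`, `K`, `v`, `w`), but this is a generic formula sketch, not the library's actual `volatility.BachelierPrice` implementation.

```python
import numpy as np
from scipy.stats import norm

def bachelier_price(F, K, v, w=1):
    """Undiscounted Bachelier (normal-vol) option price.
    F: forward, K: strike, v: vol * sqrt(expiry), w: +1 caplet/call, -1 floorlet/put."""
    d = w * (F - K) / v
    return v * (d * norm.cdf(d) + norm.pdf(d))
```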
Cyclical figurate numbersProblem 61https://projecteuler.net/minimal=61 Triangle, square, pentagonal, hexagonal, heptagonal, and octagonal numbers are all figurate (polygonal) numbers and are generated by the following formulae:Triangle P3,n=n(n+1)/2 1, 3, 6, 10, 15, ...Square P4,n=n^2 1, 4, 9, 16, 25, ...Pentagonal P5,n=n(3n−1)/2 1, 5, 12, 22, 35, ...Hexagonal P6,n=n(2n−1) 1, 6, 15, 28, 45, ...Heptagonal P7,n=n(5n−3)/2 1, 7, 18, 34, 55, ...Octagonal P8,n=n(3n−2) 1, 8, 21, 40, 65, ...The ordered set of three 4-digit numbers: 8128, 2882, 8281, has three interesting properties.The set is cyclic, in that the last two digits of each number is the first two digits of the next number (including the last number with the first).Each polygonal type: triangle (P3,127=8128), square (P4,91=8281), and pentagonal (P5,44=2882), is represented by a different number in the set.This is the only set of 4-digit numbers with this property.Find the sum of the only ordered set of six cyclic 4-digit numbers for which each polygonal type: triangle, square, pentagonal, hexagonal, heptagonal, and octagonal, is represented by a different number in the set. | triangle = lambda n: int( (n*(n+1))/2 )
square = lambda n: int( n**2 )
pentagonal = lambda n: int( (n*(3*n-1))/2 )
hexagonal = lambda n: ( n*(2*n-1) )
heptagonal = lambda n: int( (n*(5*n-3))/2 )
octagonal = lambda n: int( n*(3*n-2) )
# Tests
assert [triangle(x) for x in range(1, 6)] == [1,3,6,10,15]
assert [square(x) for x in range(1, 6)] == [1,4,9,16,25]
assert [pentagonal(x) for x in range(1, 6)] == [1,5,12,22,35]
assert [hexagonal(x) for x in range(1, 6)] == [1,6,15,28,45]
assert [heptagonal(x) for x in range(1, 6)] == [1,7,18,34,55]
assert [octagonal(x) for x in range(1, 6)] == [1,8,21,40,65]
def get_q_r(x):
return (x // 100, x % 100)
def generate_polygonal_numbers(func):
"""
Returns list of numbers between 1010 and 9999 given a function.
"""
result = []
for n in range(1, 150):
p = func(n)
q, r = get_q_r(p)
if (1010 <= p <= 9999) & (r >= 10):
result.append(p)
if p >= 9999:
break
return result
triangle_nums = generate_polygonal_numbers(triangle)
square_nums = generate_polygonal_numbers(square)
pentagonal_nums = generate_polygonal_numbers(pentagonal)
hexagonal_nums = generate_polygonal_numbers(hexagonal)
heptagonal_nums = generate_polygonal_numbers(heptagonal)
octagonal_nums = generate_polygonal_numbers(octagonal)
polygonal_sets = [triangle_nums, square_nums, pentagonal_nums, hexagonal_nums, heptagonal_nums, octagonal_nums]
for p3 in triangle_nums:
for p4 in square_nums:
for p5 in pentagonal_nums:
if len(set((p3, p4, p5))) < 3:
continue
q3, r3 = get_q_r(p3)
q4, r4 = get_q_r(p4)
q5, r5 = get_q_r(p5)
if (r3 == q4) & (r4 == q5) & (r5 == q3):
print(p3, p4, p5)
elif (r3 == q5) & (r5 == q4) & (r4 == q3):
print(p3, p4, p5)
1024**0.5 | _____no_output_____ | MIT | project_euler/061_Cyclical figurate numbers_incomplete.ipynb | Sabihxh/projectEuler |
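The cells above reproduce the three-number example from the problem statement. One possible way to extend it to the full six-number chain (my sketch, reusing the filtered lists built earlier) is a small depth-first search over the remaining polygonal families:

```python
def solve_six_chain():
    """Sketch: find the ordered set of six cyclic 4-digit numbers, one per polygonal family."""
    other_families = [square_nums, pentagonal_nums, hexagonal_nums,
                      heptagonal_nums, octagonal_nums]

    def extend(chain, remaining):
        if not remaining:
            # Close the cycle: last two digits of the last number == first two of the first
            return chain if chain[-1] % 100 == chain[0] // 100 else None
        for i, family in enumerate(remaining):
            for p in family:
                if p // 100 == chain[-1] % 100:
                    found = extend(chain + [p], remaining[:i] + remaining[i + 1:])
                    if found:
                        return found
        return None

    # Cyclic symmetry lets us fix the triangle family as the starting point
    for start in triangle_nums:
        found = extend([start], other_families)
        if found:
            return found, sum(found)
    return None

# solve_six_chain()  # -> (the six numbers, their sum)
```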
Analysis of deaths in France (all causes combined)This notebook aims to put the death figures of recent years in France into perspective in order to draw informed conclusions. | # Data analysis and processing library
import pandas as pd
from pandas.api.types import CategoricalDtype
print("Pandas version :", pd.__version__)
import numpy as np
from datetime import datetime
# Plotting libraries
import matplotlib.pyplot as plt
# File-handling libraries
import glob
import os.path
# csv library, used to write the cache files
import csv
# "Constantes" ou variables globales
csv_delimiter = ';'
print ('Imports des bibliothèques terminés !') | Pandas version : 1.1.3
Imports des bibliothèques terminés !
| MIT | stats-deces.ipynb | SteamFred/data-analysis |
Source dataThe source data comes from the __[Fichier des personnes décédées](https://www.data.gouv.fr/fr/datasets/fichier-des-personnes-decedees/)__ (register of deceased persons) provided as open data on [data.gouv.fr](https://www.data.gouv.fr/fr/).The `deces-*.txt` files must be downloaded into the notebook's working directory.For quick processing, start with a small number of files, e.g. 2002 and 2003.**note**: Some files occasionally carry a few "non-printable characters" that make reading and parsing them fail. In that case, the file has to be edited to find and remove these characters. I do not know whether this is a problem with the original file or due to a corrupted download; I have not investigated. Analysis and extractionTo speed things up, we will read the `deces-*.txt` files, which have a fixed-width format, and extract the following essential information: - date of death - date of birth This information will be written to a cache file in CSV format, which allows several things: 1. Speed up any re-reading of the source data: instead of reading a file of more than 100 MB for each year, we only read a few MB. 1. Manipulate this data in a spreadsheet independently of this exercise, for cross-checking for example. 1. Store the age at death directly (obtained by subtraction) rather than the date of birth, to avoid redoing the computation every time the data is used. To keep the code understandable, I leave in a number of intermediate variables. | # List all the files that start with "deces" and are in text format (end with ".txt")
sourcelist = [f for f in glob.glob("deces*.txt")]
# For each file in this list...
for source in sourcelist:
# create a counterpart with the same name, but ending in .csv
cachefile = source[0:-4] + '.csv'
# However, only create this file if it does not exist or if it is older than the data file
if not(os.path.exists(cachefile)) or os.path.getmtime(source) > os.path.getmtime(cachefile):
if os.path.exists(cachefile):
os.remove(cachefile)
with open(cachefile, 'w', newline='') as cache:
cache_writer = csv.writer(cache, delimiter = csv_delimiter, quotechar='"', quoting=csv.QUOTE_MINIMAL)
print ('Génération du cache CSV pour le fichier : ' + source)
# Write a first row holding the names of the 2 columns
cache_writer.writerow(['date', 'age'])
# Then open the source file to read all its lines, handling accented characters
with open(source,'r', errors='replace') as f:
lines = f.readlines()
# For each line of the source file...
for l in lines:
# ...parse the date of death from characters 154 to 161 inclusive (so 162 exclusive)
txt_date_d = l[154:162]
try:
date_d = datetime.strptime(txt_date_d, "%Y%m%d")
except ValueError:
# Si la date n'est pas interprétable, alors on met zero
date_d = 0
continue
# ...parse the date of birth from characters 81 to 88
try:
date_n = datetime.strptime(l[81:89], "%Y%m%d")
except ValueError:
# Si la date n'est pas interprétable, alors on met zero
date_n = 0
continue
# Here we can choose to filter.
# For example, we could keep only the years 20xx and 199x
# Above all, we ignore unrecognized dates
if date_d != 0 and date_n != 0 and (txt_date_d[0:2] == '20' or txt_date_d[0:3] == '199'):
# Then compute the age, which is written with the date of death to the csv file
age = date_d.year - date_n.year - ((date_d.month, date_d.day) < (date_n.month, date_n.day))
cache_writer.writerow([txt_date_d[0:4]+'/'+txt_date_d[4:6]+'/'+txt_date_d[6:8],age])
print ('(Ré)génération des caches CSV terminée !') | (Ré)génération des caches CSV terminée !
| MIT | stats-deces.ipynb | SteamFred/data-analysis |
Re-reading the cacheBelow, we read all the cache files and concatenate them in memory in order to run our analysis.To reset all the data of the "dataframe" df, this is the cell to re-run. | print ('Lecture des fichiers cache...')
# Build a list of all the csv files
chunks = []
cachelist = [f for f in glob.glob("deces*.csv")]
for cache in cachelist:
# print ('Lecture de :', cache)
chunks.append(pd.read_csv(cache,
sep=';',
encoding='latin_1', # Specify the encoding of the file to read
dtype={'age':np.int8}, # Store the age in a short format for performance
index_col=['date'], # Create an index on the date column
parse_dates=True)) # Automatically parse anything that looks like a date
# Concatenate everything
df = pd.concat(chunks, axis=0, ignore_index=False)
# Show the first rows to check that everything went well
df.tail()
# Usage examples:
# * Select all deaths under age 50: df[df.age < 50]
# * Select all deaths in February 2020: df.loc['2020-02'] | Lecture des fichiers cache...
| MIT | stats-deces.ipynb | SteamFred/data-analysis |
ClassificationTo simplify the graphical interpretation, we will put each death into an age category.I split the ages into more or less fine brackets... | df['classe'] = pd.cut(df.age, bins=[0, 18, 25, 45, 55, 65, 75, 85, 150],
labels=['mineur', '18-25 ans', '26-45 ans', '46-55 ans', '56-65 ans', '66-75 ans', '76-85 ans', 'plus de 86 ans'])
df.tail() | _____no_output_____ | MIT | stats-deces.ipynb | SteamFred/data-analysis |
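One detail worth keeping in mind (my observation, not the author's): with the default `right=True`, `pd.cut` builds right-closed intervals, so an age of exactly 18 lands in the first bin and an age of 0 in none.

```python
# Right-closed bins (0, 18], (18, 25]: 18 is labelled 'mineur' and 0 becomes NaN
pd.cut(pd.Series([0, 18, 19]), bins=[0, 18, 25], labels=['mineur', '18-25 ans'])
```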
Number of deaths per date and per categoryFrom the dataframe containing the categories, we build a matrix counting the number of deaths per date and per category. | ages_df = df.groupby(['date', 'classe']).age.count().astype(np.int16).unstack()
ages_df.tail() | _____no_output_____ | MIT | stats-deces.ipynb | SteamFred/data-analysis |
Graphical analyses Graphical analysis by age class during the lockdown in FranceTo make the curve more readable (to de-noise it), we use an ewm moving-average smoothing with a fairly small alpha factor.Let us analyse the lockdown period, widened to March through May.Do we observe that, for the population under 65, the number of deaths did not change in any notable way? | # To remove the smoothing, set alpha=1
# _= est une petite astuce de Romain WaaY pour ne pas afficher de message parasite avec le graphique
_=ages_df.ewm(alpha=0.2).mean().loc['2020-03':'2020-05'].plot(figsize=(18,10), legend='reverse') | _____no_output_____ | MIT | stats-deces.ipynb | SteamFred/data-analysis |
Graphical analysis over the last 20 yearsLet us now put the excess mortality of spring 2020 into perspective against the last 20 years, all age classes combined.In general, we see death peaks that get higher and higher every 2 to 4 years.The 2020 peak is high, but seems to stay within the overall upward trend... | _=df.loc['2000-09':'2020-11-25'].classe.groupby('date').count().plot(figsize=(18,8))
# Compute the yearly averages
an_debut = 2001
an_fin = 2021
moyennes = pd.Series([int(ages_df.loc[str(an)].mean().sum()) for an in range(an_debut,an_fin)],
index=[datetime.strptime(str(an), "%Y") for an in range(an_debut,an_fin)])
moyennes.plot()
#Todo: overlay the curves
#plt.figure(figsize=(18,8))
#df.loc['2009-09'].classe.groupby('date').count().plot()
#df.loc['2010-09'].classe.groupby('date').count().plot()
ddf = pd.DataFrame(df.classe.groupby('date').count())
ddf.tail()
#Todo: overlay the curves
plt.figure(figsize=(18,8))
#df.loc['2019-09-01':'2019-10-26'].classe.groupby('date').count().ewm(alpha=0.2).mean().plot()
df.loc['2019-10-01':].classe.groupby('date').count().ewm(alpha=0.2).mean().plot()
ddf['annee'] = ddf.index.year
ddf['doy'] = ddf.index.dayofyear
ddf.tail()
ddf.reset_index(inplace=True)
ddf.set_index(['annee', 'doy'], inplace=True)
ddf.loc[2020].classe.plot(x=ddf.date) | _____no_output_____ | MIT | stats-deces.ipynb | SteamFred/data-analysis |
INSEE.fr: [Nombre de décès quotidiens - France, régions et départements](https://www.insee.fr/fr/statistiques/4487988?sommaire=4487854) (daily number of deaths for France, regions and departments), individual file containing information on each death - 20 November 2020 | ds2020 = pd.read_csv('DC_20202021_det.csv', sep=';', encoding='utf-8')
ds2020 = ds2020.drop(['COMDEC', 'DEPDEC', 'SEXE', 'COMDOM', 'LIEUDEC2'], axis=1)
nb_lignes_brut = ds2020.shape[0]
ds2020 = ds2020.dropna(axis=0)
print ("Nombre de lignes incomplètes retirées :", str(nb_lignes_brut - ds2020.shape[0]))
ds2020.tail()
ds2020['MNAIS'] = ds2020['MNAIS'].astype('int64')
ds2020['JNAIS'] = ds2020['JNAIS'].astype('int64')
ds2020['age'] = ds2020['ADEC'] - ds2020['ANAIS'] - (ds2020['MDEC'] < ds2020['MNAIS'])
ds2020.drop(['ANAIS', 'MNAIS', 'JNAIS'], axis=1, inplace=True)
ds2020.tail()
ds2020['classe'] = pd.cut(ds2020.age, bins=[0, 18, 25, 45, 55, 65, 75, 85, 150],
labels=['mineur', '18-25 ans', '26-45 ans', '46-55 ans', '56-65 ans', '66-75 ans', '76-85 ans', 'plus de 86 ans'])
#ds2020['date'] = pd.to_datetime(str(ds2020['ADEC'])+'-'+str(ds2020['MDEC'])+'-'+str(ds2020['JDEC']), format='%y-%m-%d')
ds2020.rename(columns={"ADEC": "year", "MDEC": "month", "JDEC": "day"}, inplace = True)
ds2020['date'] = pd.to_datetime(ds2020.loc[:,['year', 'month', 'day']])
ds2020.tail()
ds2020.drop(['month', 'day', 'age'], axis=1, inplace = True)
ds2020.rename(columns={"year": "annee"}, inplace = True)
ds2020.set_index('date', inplace=True)
ds2020.tail()
ages_ds = ds2020.groupby(['date', 'classe']).annee.count().astype(np.int16).unstack()
ages_ds.tail()
_=ages_ds.ewm(alpha=0.1).mean().loc['2020-09':].plot(figsize=(18,10), legend='reverse')
dds = pd.DataFrame(ds2020.classe.groupby('date').count())
dds.tail()
dds['annee'] = dds.index.year
dds['doy'] = dds.index.dayofyear
dds.reset_index(inplace=True)
dds.set_index(['annee', 'doy'], inplace=True)
dds.tail() | _____no_output_____ | MIT | stats-deces.ipynb | SteamFred/data-analysis |
Hospital data source: https://www.data.gouv.fr/fr/datasets/donnees-hospitalieres-relatives-a-lepidemie-de-covid-19/ | dcc = pd.read_csv('donnees-hospitalieres-classe-age-covid19-2021-01-09-19h03.csv', sep=';', encoding='utf-8',
index_col=['jour'], # Crée un indexe sur la colonne date
parse_dates=True)
dcc = dcc.drop(['reg', 'hosp', 'rea', 'rad'], axis=1)
nb_lignes_brut = dcc.shape[0]
dcc = dcc.dropna(axis=0)
print ("Nombre de lignes incomplètes retirées :", str(nb_lignes_brut - dcc.shape[0]))
dcc.tail(20)
ages_dcc = dcc.groupby(['jour', 'cl_age90'])['dc'].sum().unstack()
#.drop([0], axis=1)
ages_dcc.tail(20)
total_dcc = pd.DataFrame(ages_dcc[0])
idx = pd.IndexSlice
total_dcc['delta'] = total_dcc[0]
delta = 0
y_prev = 0
x_prev = 0
for x, y in total_dcc.iterrows():
if x_prev != 0:
delta = y[0] - y_prev
total_dcc.loc[idx[x]]['delta'] = delta
else:
total_dcc.loc[idx[x]]['delta'] = y[0]
y_prev = y[0]
x_prev = x
total_dcc.head()
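# Added sketch (not in the original notebook): pandas can compute the same daily
# increments without the explicit loop above; diff() then filling the first row
# with its cumulative value reproduces the loop exactly.
delta_check = total_dcc[0].diff().fillna(total_dcc[0])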
_=total_dcc['2020']['delta'].ewm(alpha=0.3).mean().plot(figsize=(18,10))
# _=total_dcc['2021']['delta'].ewm(alpha=0.3).mean().plot(figsize=(18,10))
total_dcc['annee'] = total_dcc.index.year
total_dcc['doy'] = total_dcc.index.dayofyear
total_dcc.reset_index(inplace=True)
total_dcc.set_index(['annee', 'doy'], inplace=True)
total_dcc.tail()
_=ages_dcc.ewm(alpha=0.3).mean().plot(figsize=(18,10), legend='reverse')
#Let's try to put legends on the plot...
fig, ax = plt.subplots(figsize=(18,10))
ax.set_ylim(bottom=0, top=3000)
doy = range(1,367)
c2017 = ax.plot(ddf.loc[2017].classe)
c2018 = ax.plot(ddf.loc[2018].classe)
c2019 = ax.plot(ddf.loc[2019].classe)
c2020 = ax.plot(dds.loc[2020].classe)
c2021 = ax.plot(dds.loc[2021].classe)
d2020 = ax.plot(total_dcc.loc[2020].delta)
#ax.legend((c2019, c2020), ('2019', '2020'), loc='upper right')
ax.set_xlabel('jour de l''année')
ax.set_ylabel('nombre de décès par jour')
ax.set_title('Courbes annuelles de décès en France')
total_dcc['surmortalite'] = dds.loc[2020].classe - total_dcc.loc[2020].delta
total_dcc['surmortalite'] | _____no_output_____ | MIT | stats-deces.ipynb | SteamFred/data-analysis |
The following additional libraries are needed to run thisnotebook. Note that running on Colab is experimental, please report a Githubissue if you have any problem. | !pip install git+https://github.com/d2l-ai/d2l-zh@release # installing d2l
| _____no_output_____ | MIT | Notebook/7/7.3.NiN.ipynb | zihan987/d2l-PaddlePaddle |
7.3. Network in Network (NiN):label:`sec_nin`LeNet, AlexNet, and VGG all share a common design pattern: extract spatial structure features through a sequence of convolutional and pooling layers, then process the feature representations through fully connected layers. The improvements of AlexNet and VGG over LeNet mainly lie in how to widen and deepen these two modules. Alternatively, one could imagine using fully connected layers earlier in this process. However, using dense layers might give up the spatial structure of the representation entirely.*Network in Network* (*NiN*) offers a very simple solution: use a multilayer perceptron on the channels of each pixel separately :cite:`Lin.Chen.Yan.2013` (**7.3.1. NiN blocks**)Recall that the inputs and outputs of convolutional layers consist of four-dimensional tensors whose axes correspond to the example, channel, height, and width, respectively. In addition, the inputs and outputs of fully connected layers are usually two-dimensional tensors corresponding to the example and feature, respectively. The idea behind NiN is to apply a fully connected layer at each pixel location (for each height and width). If we tie the weights across each spatial location, we can think of this as a $1\times 1$ convolutional layer (as described in :numref:`sec_channels`), or as a fully connected layer acting independently on each pixel location. Another way to view this is to think of each pixel in the spatial dimensions as a single example and the channel dimension as different features.:numref:`fig_nin` illustrates the main structural differences between VGG and NiN and their blocks. The NiN block starts with an ordinary convolutional layer, followed by two $1\times 1$ convolutional layers. These two $1\times 1$ convolutional layers act as per-pixel fully connected layers with ReLU activation functions. The convolution window shape of the first layer is typically set by the user. The subsequent window shapes are fixed to $1 \times 1$.:width:`600px`:label:`fig_nin` | import paddle
import paddle.nn as nn
import numpy as np
class Nin(nn.Layer):
def __init__(self, num_channels, num_filters, kernel_size, strides, padding):
super(Nin, self).__init__()
model = [
nn.Conv2D(num_channels, num_filters, kernel_size, stride=strides, padding=padding),
nn.ReLU(),
nn.Conv2D(num_filters, num_filters, 1),
nn.ReLU(),
nn.Conv2D(num_filters, num_filters, 1),
nn.ReLU()
]
self.model = nn.Sequential(*model)
def forward(self, X):
return self.model(X) | _____no_output_____ | MIT | Notebook/7/7.3.NiN.ipynb | zihan987/d2l-PaddlePaddle |
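To make the "per-pixel fully connected layer" idea above concrete, here is a tiny shape check (my illustration, not part of the original chapter): a $1\times 1$ convolution mixes channels while leaving the spatial dimensions untouched.

```python
# Recombine 96 input channels into 256 output channels at every 5x5 position,
# like a fully connected layer shared across all pixel locations
x = paddle.randn([1, 96, 5, 5])
pixelwise_fc = nn.Conv2D(96, 256, kernel_size=1)
print(pixelwise_fc(x).shape)  # [1, 256, 5, 5]
```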
[**7.3.2. The NiN model**]The original NiN network was proposed shortly after AlexNet and clearly drew some inspiration from it. NiN uses convolutional layers with window shapes of $11\times 11$, $5\times 5$, and $3\times 3$, with the same numbers of output channels as in AlexNet. Each NiN block is followed by a max pooling layer with a $3\times 3$ pooling window and a stride of 2.One significant difference between NiN and AlexNet is that NiN does away with fully connected layers entirely. Instead, NiN uses a NiN block whose number of output channels equals the number of label classes, followed by a *global average pooling layer* that produces a vector of logits. One advantage of the NiN design is that it significantly reduces the number of parameters the model requires. In practice, however, this design sometimes increases the time needed to train the model. | class Net(nn.Layer):
def __init__(self, num_channels, class_dim):
super(Net, self).__init__()
model = [
Nin(num_channels, 96, 11, strides=4, padding=0),
nn.MaxPool2D(kernel_size=3, stride=2),
Nin(96, 256, 5, strides=1, padding=2),
nn.MaxPool2D(kernel_size=3, stride=2),
# Nin(256, 384, 3, strides=1, padding=1),
# nn.MaxPool2D(kernel_size=3, stride=2),
nn.Dropout(),
# the number of label classes is 10
# Nin(384, 10, 3, strides=1, padding=1),
Nin(256, 10, 3, strides=1, padding=1),
paddle.fluid.dygraph.Pool2D(pool_type='max', global_pooling=True)
]
self.model = nn.Sequential(*model)
def forward(self, X):
Y = self.model(X)
Y = paddle.flatten(Y, start_axis=1)
return Y
with paddle.fluid.dygraph.guard():
net = Net(3, 10)
X = paddle.to_tensor(np.random.uniform(-1., 1., [5, 3, 28, 28]).astype('float32'))
Y = net(X)
print(Y.shape) | [5, 10]
| MIT | Notebook/7/7.3.NiN.ipynb | zihan987/d2l-PaddlePaddle |
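Note that the text above describes a *global average pooling* layer, while the code uses `pool_type='max'` with `global_pooling=True`. A sketch of the average-pooling head that matches the original NiN design, using the non-deprecated `paddle.nn` API, could look like this (an assumed alternative, not the repository's code):

```python
# Assumed alternative head: global *average* pooling over each of the 10 channel maps
nin_head = nn.Sequential(
    Nin(256, 10, 3, strides=1, padding=1),
    nn.AdaptiveAvgPool2D(output_size=1),  # averages each channel map down to 1x1
)
```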
We create a data example to [**see the output shape of each block**]. | with paddle.fluid.dygraph.guard():
net = Net(1, 10)
param_info = paddle.summary(net, (1, 1, 28, 28))
print(param_info) | ---------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
===========================================================================
Conv2D-1 [[1, 1, 28, 28]] [1, 96, 5, 5] 11,712
ReLU-1 [[1, 96, 5, 5]] [1, 96, 5, 5] 0
Conv2D-2 [[1, 96, 5, 5]] [1, 96, 5, 5] 9,312
ReLU-2 [[1, 96, 5, 5]] [1, 96, 5, 5] 0
Conv2D-3 [[1, 96, 5, 5]] [1, 96, 5, 5] 9,312
ReLU-3 [[1, 96, 5, 5]] [1, 96, 5, 5] 0
Nin-1 [[1, 1, 28, 28]] [1, 96, 5, 5] 0
MaxPool2D-1 [[1, 96, 5, 5]] [1, 96, 2, 2] 0
Conv2D-4 [[1, 96, 2, 2]] [1, 256, 2, 2] 614,656
ReLU-4 [[1, 256, 2, 2]] [1, 256, 2, 2] 0
Conv2D-5 [[1, 256, 2, 2]] [1, 256, 2, 2] 65,792
ReLU-5 [[1, 256, 2, 2]] [1, 256, 2, 2] 0
Conv2D-6 [[1, 256, 2, 2]] [1, 256, 2, 2] 65,792
ReLU-6 [[1, 256, 2, 2]] [1, 256, 2, 2] 0
Nin-2 [[1, 96, 2, 2]] [1, 256, 2, 2] 0
MaxPool2D-2 [[1, 256, 2, 2]] [1, 256, 1, 1] 0
Dropout-1 [[1, 256, 1, 1]] [1, 256, 1, 1] 0
Conv2D-7 [[1, 256, 1, 1]] [1, 10, 1, 1] 23,050
ReLU-7 [[1, 10, 1, 1]] [1, 10, 1, 1] 0
Conv2D-8 [[1, 10, 1, 1]] [1, 10, 1, 1] 110
ReLU-8 [[1, 10, 1, 1]] [1, 10, 1, 1] 0
Conv2D-9 [[1, 10, 1, 1]] [1, 10, 1, 1] 110
ReLU-9 [[1, 10, 1, 1]] [1, 10, 1, 1] 0
Nin-3 [[1, 256, 1, 1]] [1, 10, 1, 1] 0
Pool2D-1 [[1, 10, 1, 1]] [1, 10, 1, 1] 0
===========================================================================
Total params: 799,846
Trainable params: 799,846
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.19
Params size (MB): 3.05
Estimated Total Size (MB): 3.24
---------------------------------------------------------------------------
{'total_params': 799846, 'trainable_params': 799846}
| MIT | Notebook/7/7.3.NiN.ipynb | zihan987/d2l-PaddlePaddle |
[**7.3.3. Training the model**]As before, we use Fashion-MNIST to train the model. Training NiN is similar to training AlexNet and VGG. | import paddle
import paddle.vision.transforms as T
from paddle.vision.datasets import FashionMNIST
# Dataset preprocessing
transform = T.Compose([
T.Resize(64),
T.Transpose(),
T.Normalize([127.5], [127.5]),
])
train_dataset = FashionMNIST(mode='train', transform=transform)
val_dataset = FashionMNIST(mode='test', transform=transform)
# Model definition
model = paddle.Model(Net(1, 10))
# Set the optimizer, loss, and metric needed to train the model
model.prepare(
paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters()),
paddle.nn.CrossEntropyLoss(),
paddle.metric.Accuracy(topk=(1, 5)))
# Start training and evaluation
model.fit(train_dataset, val_dataset, epochs=2, batch_size=64, log_freq=100) | The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/2
step 100/938 - loss: 1.7510 - acc_top1: 0.3219 - acc_top5: 0.6873 - 13ms/step
step 200/938 - loss: 1.2018 - acc_top1: 0.3822 - acc_top5: 0.7639 - 13ms/step
step 300/938 - loss: 0.8502 - acc_top1: 0.4805 - acc_top5: 0.8007 - 13ms/step
step 400/938 - loss: 0.8844 - acc_top1: 0.5520 - acc_top5: 0.8247 - 13ms/step
step 500/938 - loss: 0.7246 - acc_top1: 0.5986 - acc_top5: 0.8372 - 13ms/step
step 600/938 - loss: 0.5191 - acc_top1: 0.6321 - acc_top5: 0.8473 - 13ms/step
step 700/938 - loss: 0.6790 - acc_top1: 0.6568 - acc_top5: 0.8539 - 13ms/step
step 800/938 - loss: 0.6095 - acc_top1: 0.6762 - acc_top5: 0.8587 - 13ms/step
step 900/938 - loss: 0.6649 - acc_top1: 0.6910 - acc_top5: 0.8624 - 13ms/step
step 938/938 - loss: 0.6247 - acc_top1: 0.6953 - acc_top5: 0.8635 - 13ms/step
Eval begin...
step 100/157 - loss: 0.5220 - acc_top1: 0.8167 - acc_top5: 0.8983 - 11ms/step
| MIT | Notebook/7/7.3.NiN.ipynb | zihan987/d2l-PaddlePaddle |
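Once `model.fit` finishes, the same high-level API can score the trained network on the held-out split. A minimal sketch, assuming the `model` and `val_dataset` objects defined above are still in scope:

```python
# Evaluate the trained NiN model on the validation split (sketch, not from the original notebook)
eval_result = model.evaluate(val_dataset, batch_size=64, verbose=1)
print(eval_result)  # dict containing the loss and the metrics configured in model.prepare
```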
Plagiarism Detection, Feature EngineeringIn this project, you will be tasked with building a plagiarism detector that examines an answer text file and performs binary classification; labeling that file as either plagiarized or not, depending on how similar that text file is to a provided, source text. Your first task will be to create some features that can then be used to train a classification model. This task will be broken down into a few discrete steps:* Clean and pre-process the data.* Define features for comparing the similarity of an answer text and a source text, and extract similarity features.* Select "good" features, by analyzing the correlations between different features.* Create train/test `.csv` files that hold the relevant features and class labels for train/test data points.In the _next_ notebook, Notebook 3, you'll use the features and `.csv` files you create in _this_ notebook to train a binary classification model in a SageMaker notebook instance.You'll be defining a few different similarity features, as outlined in [this paper](https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c412841_developing-a-corpus-of-plagiarised-short-answers/developing-a-corpus-of-plagiarised-short-answers.pdf), which should help you build a robust plagiarism detector!To complete this notebook, you'll have to complete all given exercises and answer all the questions in this notebook.> All your tasks will be clearly labeled **EXERCISE** and questions as **QUESTION**.It will be up to you to decide on the features to include in your final training and test data.--- Read in the DataThe cell below will download the necessary, project data and extract the files into the folder `data/`.This data is a slightly modified version of a dataset created by Paul Clough (Information Studies) and Mark Stevenson (Computer Science), at the University of Sheffield. You can read all about the data collection and corpus, at [their university webpage](https://ir.shef.ac.uk/cloughie/resources/plagiarism_corpus.html). > **Citation for data**: Clough, P. and Stevenson, M. Developing A Corpus of Plagiarised Short Answers, Language Resources and Evaluation: Special Issue on Plagiarism and Authorship Analysis, In Press. [Download] | # NOTE:
# you only need to run this cell if you have not yet downloaded the data
# otherwise you may skip this cell or comment it out
!wget https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c4147f9_data/data.zip
!unzip data
# import libraries
import pandas as pd
import numpy as np
import os | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
This plagiarism dataset is made of multiple text files; each of these files has characteristics that are summarized in a `.csv` file named `file_information.csv`, which we can read in using `pandas`. | csv_file = 'data/file_information.csv'
plagiarism_df = pd.read_csv(csv_file)
# print out the first few rows of data info
plagiarism_df.head() | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
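Before transforming anything, it can help to see how the files are distributed across tasks and plagiarism categories. An optional check on the DataFrame loaded above:

```python
# Optional: inspect how many files fall into each plagiarism category and task
print(plagiarism_df['Category'].value_counts())
print(plagiarism_df.groupby(['Task', 'Category']).size())
```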
Types of PlagiarismEach text file is associated with one **Task** (task A-E) and one **Category** of plagiarism, which you can see in the above DataFrame. Tasks, A-EEach text file contains an answer to one short question; these questions are labeled as tasks A-E. For example, Task A asks the question: "What is inheritance in object oriented programming?" Categories of plagiarism Each text file has an associated plagiarism label/category:**1. Plagiarized categories: `cut`, `light`, and `heavy`.*** These categories represent different levels of plagiarized answer texts. `cut` answers copy directly from a source text, `light` answers are based on the source text but include some light rephrasing, and `heavy` answers are based on the source text, but *heavily* rephrased (and will likely be the most challenging kind of plagiarism to detect). **2. Non-plagiarized category: `non`.** * `non` indicates that an answer is not plagiarized; the Wikipedia source text is not used to create this answer. **3. Special, source text category: `orig`.*** This is a specific category for the original, Wikipedia source text. We will use these files only for comparison purposes. --- Pre-Process the DataIn the next few cells, you'll be tasked with creating a new DataFrame of desired information about all of the files in the `data/` directory. This will prepare the data for feature extraction and for training a binary, plagiarism classifier. EXERCISE: Convert categorical to numerical dataYou'll notice that the `Category` column in the data, contains string or categorical values, and to prepare these for feature extraction, we'll want to convert these into numerical values. Additionally, our goal is to create a binary classifier and so we'll need a binary class label that indicates whether an answer text is plagiarized (1) or not (0). Complete the below function `numerical_dataframe` that reads in a `file_information.csv` file by name, and returns a *new* DataFrame with a numerical `Category` column and a new `Class` column that labels each answer as plagiarized or not. Your function should return a new DataFrame with the following properties:* 4 columns: `File`, `Task`, `Category`, `Class`. The `File` and `Task` columns can remain unchanged from the original `.csv` file.* Convert all `Category` labels to numerical labels according to the following rules (a higher value indicates a higher degree of plagiarism): * 0 = `non` * 1 = `heavy` * 2 = `light` * 3 = `cut` * -1 = `orig`, this is a special value that indicates an original file.* For the new `Class` column * Any answer text that is not plagiarized (`non`) should have the class label `0`. * Any plagiarized answer texts should have the class label `1`. * And any `orig` texts will have a special label `-1`. Expected outputAfter running your function, you should get a DataFrame with rows that looks like the following: ``` File Task Category Class0 g0pA_taska.txt a 0 01 g0pA_taskb.txt b 3 12 g0pA_taskc.txt c 2 13 g0pA_taskd.txt d 1 14 g0pA_taske.txt e 0 0......99 orig_taske.txt e -1 -1``` | # Read in a csv file and return a transformed dataframe
def numerical_dataframe(csv_file='data/file_information.csv'):
'''Reads in a csv file which is assumed to have `File`, `Category` and `Task` columns.
This function does two things:
1) converts `Category` column values to numerical values
2) Adds a new, numerical `Class` label column.
The `Class` column will label plagiarized answers as 1 and non-plagiarized as 0.
Source texts have a special label, -1.
:param csv_file: The directory for the file_information.csv file
:return: A dataframe with numerical categories and a new `Class` label column'''
# your code here
df = pd.read_csv(csv_file)
df['Class'] = df["Category"].map({'non':0,'heavy':1,'light':1,'cut':1,'orig':-1})
df['Category'] = df['Category'].map({'non':0,'heavy':1,'light':2,'cut':3,'orig':-1})
return df
| _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
Test cellsBelow are a couple of test cells. The first is an informal test where you can check that your code is working as expected by calling your function and printing out the returned result.The **second** cell below is a more rigorous test cell. The goal of a cell like this is to ensure that your code is working as expected, and to form any variables that might be used in _later_ tests/code, in this case, the data frame, `transformed_df`.> The cells in this notebook should be run in chronological order (the order they appear in the notebook). This is especially important for test cells.Often, later cells rely on the functions, imports, or variables defined in earlier cells. For example, some tests rely on previous tests to work.These tests do not test all cases, but they are a great way to check that you are on the right track! | # informal testing, print out the results of a called function
# create new `transformed_df`
transformed_df = numerical_dataframe(csv_file ='data/file_information.csv')
# check work
# check that all categories of plagiarism have a class label = 1
transformed_df.head(10)
# test cell that creates `transformed_df`, if tests are passed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# importing tests
import problem_unittests as tests
# test numerical_dataframe function
tests.test_numerical_df(numerical_dataframe)
# if above test is passed, create NEW `transformed_df`
transformed_df = numerical_dataframe(csv_file ='data/file_information.csv')
# check work
print('\nExample data: ')
transformed_df.head() | Tests Passed!
Example data:
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
Text Processing & Splitting Data Recall that the goal of this project is to build a plagiarism classifier. At its heart, this is a text-comparison task: one that looks at a given answer and a source text, compares them, and predicts whether the answer has plagiarized from the source. To do this comparison effectively and train a classifier, we'll need to do a few more things: pre-process all of our text data and prepare the text files (in this case, the 95 answer files and 5 original source files) to be easily compared, and split our data into a `train` and `test` set that can be used to train a classifier and evaluate it, respectively. To this end, you've been provided code that adds additional information to your `transformed_df` from above. The next two cells need not be changed; they add two additional columns to the `transformed_df`: 1. A `Text` column; this holds all the lowercase text for a `File`, with extraneous punctuation removed. 2. A `Datatype` column; this is a string value `train`, `test`, or `orig` that labels a data point as part of our train or test set. The details of how these additional columns are created can be found in the `helpers.py` file in the project directory. You're encouraged to read through that file to see exactly how text is processed and how data is split. Run the cells below to get a `complete_df` that has all the information you need to proceed with plagiarism detection and feature engineering. | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import helpers
# create a text column
text_df = helpers.create_text_column(transformed_df)
text_df.head()
# after running the cell above
# check out the processed text for a single file, by row index
row_idx = 0 # feel free to change this index
sample_text = text_df.iloc[0]['Text']
print('Sample processed text:\n\n', sample_text) | Sample processed text:
inheritance is a basic concept of object oriented programming where the basic idea is to create new classes that add extra detail to existing classes this is done by allowing the new classes to reuse the methods and variables of the existing classes and new methods and classes are added to specialise the new class inheritance models the is kind of relationship between entities or objects for example postgraduates and undergraduates are both kinds of student this kind of relationship can be visualised as a tree structure where student would be the more general root node and both postgraduate and undergraduate would be more specialised extensions of the student node or the child nodes in this relationship student would be known as the superclass or parent class whereas postgraduate would be known as the subclass or child class because the postgraduate class extends the student class inheritance can occur on several layers where if visualised would display a larger tree structure for example we could further extend the postgraduate node by adding two extra extended classes to it called msc student and phd student as both these types of student are kinds of postgraduate student this would mean that both the msc student and phd student classes would inherit methods and variables from both the postgraduate and student classes
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
Split data into training and test setsThe next cell will add a `Datatype` column to a given DataFrame to indicate if the record is: * `train` - Training data, for model training.* `test` - Testing data, for model evaluation.* `orig` - The task's original answer from wikipedia. Stratified samplingThe given code uses a helper function which you can view in the `helpers.py` file in the main project directory. This implements [stratified random sampling](https://en.wikipedia.org/wiki/Stratified_sampling) to randomly split data by task & plagiarism amount. Stratified sampling ensures that we get training and test data that is fairly evenly distributed across task & plagiarism combinations. Approximately 26% of the data is held out for testing and 74% of the data is used for training.The function **train_test_dataframe** takes in a DataFrame that it assumes has `Task` and `Category` columns, and, returns a modified frame that indicates which `Datatype` (train, test, or orig) a file falls into. This sampling will change slightly based on a passed in *random_seed*. Due to a small sample size, this stratified random sampling will provide more stable results for a binary plagiarism classifier. Stability here is smaller *variance* in the accuracy of classifier, given a random seed. | random_seed = 1 # can change; set for reproducibility
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import helpers
# create new df with Datatype (train, test, orig) column
# pass in `text_df` from above to create a complete dataframe, with all the information you need
complete_df = helpers.train_test_dataframe(text_df, random_seed=random_seed)
# check results
complete_df.head(10) | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
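Since the split is stratified by task and plagiarism level, it is worth confirming that roughly 74% of the answer files landed in the training set. A small, optional check on the `complete_df` created above:

```python
# Optional sanity check: distribution of train/test/orig labels after stratified sampling
print(complete_df['Datatype'].value_counts())
# And the category balance within each split
print(pd.crosstab(complete_df['Datatype'], complete_df['Category']))
```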
Determining PlagiarismNow that you've prepared this data and created a `complete_df` of information, including the text and class associated with each file, you can move on to the task of extracting similarity features that will be useful for plagiarism classification. > Note: The following code exercises, assume that the `complete_df` as it exists now, will **not** have its existing columns modified. The `complete_df` should always include the columns: `['File', 'Task', 'Category', 'Class', 'Text', 'Datatype']`. You can add additional columns, and you can create any new DataFrames you need by copying the parts of the `complete_df` as long as you do not modify the existing values, directly.--- Similarity Features One of the ways we might go about detecting plagiarism, is by computing **similarity features** that measure how similar a given answer text is as compared to the original wikipedia source text (for a specific task, a-e). The similarity features you will use are informed by [this paper on plagiarism detection](https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c412841_developing-a-corpus-of-plagiarised-short-answers/developing-a-corpus-of-plagiarised-short-answers.pdf). > In this paper, researchers created features called **containment** and **longest common subsequence**. Using these features as input, you will train a model to distinguish between plagiarized and not-plagiarized text files. Feature EngineeringLet's talk a bit more about the features we want to include in a plagiarism detection model and how to calculate such features. In the following explanations, I'll refer to a submitted text file as a **Student Answer Text (A)** and the original, wikipedia source file (that we want to compare that answer to) as the **Wikipedia Source Text (S)**. ContainmentYour first task will be to create **containment features**. To understand containment, let's first revisit a definition of [n-grams](https://en.wikipedia.org/wiki/N-gram). An *n-gram* is a sequential word grouping. For example, in a line like "bayes rule gives us a way to combine prior knowledge with new information," a 1-gram is just one word, like "bayes." A 2-gram might be "bayes rule" and a 3-gram might be "combine prior knowledge."> Containment is defined as the **intersection** of the n-gram word count of the Wikipedia Source Text (S) with the n-gram word count of the Student Answer Text (S) *divided* by the n-gram word count of the Student Answer Text.$$ \frac{\sum{count(\text{ngram}_{A}) \cap count(\text{ngram}_{S})}}{\sum{count(\text{ngram}_{A})}} $$If the two texts have no n-grams in common, the containment will be 0, but if _all_ their n-grams intersect then the containment will be 1. Intuitively, you can see how having longer n-gram's in common, might be an indication of cut-and-paste plagiarism. In this project, it will be up to you to decide on the appropriate `n` or several `n`'s to use in your final model. EXERCISE: Create containment featuresGiven the `complete_df` that you've created, you should have all the information you need to compare any Student Answer Text (A) with its appropriate Wikipedia Source Text (S). 
An answer for task A should be compared to the source text for task A, just as answers to tasks B, C, D, and E should be compared to the corresponding original source text.In this exercise, you'll complete the function, `calculate_containment` which calculates containment based upon the following parameters:* A given DataFrame, `df` (which is assumed to be the `complete_df` from above)* An `answer_filename`, such as 'g0pB_taskd.txt' * An n-gram length, `n` Containment calculationThe general steps to complete this function are as follows:1. From *all* of the text files in a given `df`, create an array of n-gram counts; it is suggested that you use a [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) for this purpose.2. Get the processed answer and source texts for the given `answer_filename`.3. Calculate the containment between an answer and source text according to the following equation. >$$ \frac{\sum{count(\text{ngram}_{A}) \cap count(\text{ngram}_{S})}}{\sum{count(\text{ngram}_{A})}} $$ 4. Return that containment value.You are encouraged to write any helper functions that you need to complete the function below. | # Calculate the ngram containment for one answer file/source file pair in a df
from sklearn.feature_extraction.text import CountVectorizer
def calculate_containment(df, n, answer_filename):
'''Calculates the containment between a given answer text and its associated source text.
This function creates a count of ngrams (of a size, n) for each text file in our data.
Then calculates the containment by finding the ngram count for a given answer text,
and its associated source text, and calculating the normalized intersection of those counts.
:param df: A dataframe with columns,
'File', 'Task', 'Category', 'Class', 'Text', and 'Datatype'
:param n: An integer that defines the ngram size
:param answer_filename: A filename for an answer text in the df, ex. 'g0pB_taskd.txt'
:return: A single containment value that represents the similarity
between an answer text and its source text.
'''
# your code here
df_answer = df.loc[df['File']==answer_filename]
task = df_answer['Task'].values[0]
answer_text = df_answer['Text'].values[0]
source_text = df.loc[(df['Task']==task) & (df['Datatype']=='orig')]['Text'].values[0]
counts = CountVectorizer(analyzer='word', ngram_range=(n,n))
ngrams = counts.fit_transform([answer_text, source_text])
ngram_array = ngrams.toarray()
answer_ngram = ngram_array[0]
source_ngram = ngram_array[1]
#print(answer_ngram, source_ngram)
containment = 0
for i,j in zip(answer_ngram, source_ngram):
containment += min(i,j)
return containment/sum(answer_ngram)
| _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
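To make the containment formula concrete, here is a small, self-contained illustration on two toy sentences (independent of the project data); it builds the shared n-gram counts with `CountVectorizer` in the same way as the function above:

```python
# Toy illustration of 2-gram containment (not part of the original notebook)
from sklearn.feature_extraction.text import CountVectorizer

answer = "bayes rule gives us a way to combine prior knowledge"
source = "bayes rule gives a way to combine evidence with prior knowledge"

vectorizer = CountVectorizer(analyzer='word', ngram_range=(2, 2))
ngrams = vectorizer.fit_transform([answer, source]).toarray()
intersection = sum(min(a, s) for a, s in zip(ngrams[0], ngrams[1]))
print(intersection / ngrams[0].sum())  # fraction of the answer's 2-grams that also appear in the source
```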
Test cellsAfter you've implemented the containment function, you can test out its behavior. The cell below iterates through the first few files, and calculates the original category _and_ containment values for a specified n and file.>If you've implemented this correctly, you should see that the non-plagiarized have low or close to 0 containment values and that plagiarized examples have higher containment values, closer to 1.Note what happens when you change the value of n. I recommend applying your code to multiple files and comparing the resultant containment values. You should see that the highest containment values correspond to files with the highest category (`cut`) of plagiarism level. | # select a value for n
n = 3
# indices for first few files
test_indices = range(5)
# iterate through files and calculate containment
category_vals = []
containment_vals = []
for i in test_indices:
# get level of plagiarism for a given file index
category_vals.append(complete_df.loc[i, 'Category'])
# calculate containment for given file and n
filename = complete_df.loc[i, 'File']
c = calculate_containment(complete_df, n, filename)
containment_vals.append(c)
# print out result, does it make sense?
print('Original category values: \n', category_vals)
print()
print(str(n)+'-gram containment values: \n', containment_vals)
# run this test cell
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# test containment calculation
# params: complete_df from before, and containment function
tests.test_containment(complete_df, calculate_containment) | Tests Passed!
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
QUESTION 1: Why can we calculate containment features across *all* data (training & test), prior to splitting the DataFrame for modeling? That is, what about the containment calculation means that the test and training data do not influence each other? **Answer:** Testing and training data only need to be kept separate while a model is being trained. The containment calculation does not involve training a model; it is just preprocessing in which each answer text is compared only against its own Wikipedia source text, so no information flows between training and test rows. It therefore does not matter that the data has not been split yet. --- Longest Common Subsequence Containment is a good way to find overlap in word usage between two documents; it may help identify cases of cut-and-paste as well as paraphrased levels of plagiarism. Since plagiarism is a fairly complex task with varying levels, it's often useful to include other measures of similarity. The paper also discusses a feature called **longest common subsequence**. > The longest common subsequence is the longest string of words (or letters) that are *the same* between the Wikipedia Source Text (S) and the Student Answer Text (A). This value is also normalized by dividing by the total number of words (or letters) in the Student Answer Text. In this exercise, we'll ask you to calculate the longest common subsequence of words between two texts. EXERCISE: Calculate the longest common subsequence Complete the function `lcs_norm_word`; this should calculate the *longest common subsequence* of words between a Student Answer Text and corresponding Wikipedia Source Text. It may be helpful to think of this in a concrete example. A Longest Common Subsequence (LCS) problem may look as follows: * Given two texts: text A (answer text) of length n, and string S (original source text) of length m. Our goal is to produce their longest common subsequence of words: the longest sequence of words that appear left-to-right in both texts (though the words don't have to be in continuous order). * Consider: * A = "i think pagerank is a link analysis algorithm used by google that uses a system of weights attached to each element of a hyperlinked set of documents" * S = "pagerank is a link analysis algorithm used by the google internet search engine that assigns a numerical weighting to each element of a hyperlinked set of documents" * In this case, we can see that the start of each sentence is fairly similar, having overlap in the sequence of words, "pagerank is a link analysis algorithm used by" before diverging slightly. Then we **continue moving left-to-right along both texts** until we see the next common sequence; in this case it is only one word, "google". Next we find "that" and "a" and finally the same ending "to each element of a hyperlinked set of documents". * Below is a clear visual of how these sequences were found, sequentially, in each text. * Now, those words appear in left-to-right order in each document, sequentially, and even though there are some words in between, we count this as the longest common subsequence between the two texts. * If I count up each word that I found in common I get the value 20. **So, LCS has length 20**. * Next, to normalize this value, divide by the total length of the student answer; in this example that length is only 27. **So, the function `lcs_norm_word` should return the value `20/27` or about `0.7408`.** In this way, LCS is a great indicator of cut-and-paste plagiarism, or of someone having referenced the same source text multiple times in an answer.
LCS, dynamic programmingIf you read through the scenario above, you can see that this algorithm depends on looking at two texts and comparing them word by word. You can solve this problem in multiple ways. First, it may be useful to `.split()` each text into lists of comma separated words to compare. Then, you can iterate through each word in the texts and compare them, adding to your value for LCS as you go. The method I recommend for implementing an efficient LCS algorithm is: using a matrix and dynamic programming. **Dynamic programming** is all about breaking a larger problem into a smaller set of subproblems, and building up a complete result without having to repeat any subproblems. This approach assumes that you can split up a large LCS task into a combination of smaller LCS tasks. Let's look at a simple example that compares letters:* A = "ABCD"* S = "BD"We can see right away that the longest subsequence of _letters_ here is 2 (B and D are in sequence in both strings). And we can calculate this by looking at relationships between each letter in the two strings, A and S.Here, I have a matrix with the letters of A on top and the letters of S on the left side:This starts out as a matrix that has as many columns and rows as letters in the strings S and O **+1** additional row and column, filled with zeros on the top and left sides. So, in this case, instead of a 2x4 matrix it is a 3x5.Now, we can fill this matrix up by breaking it into smaller LCS problems. For example, let's first look at the shortest substrings: the starting letter of A and S. We'll first ask, what is the Longest Common Subsequence between these two letters "A" and "B"? **Here, the answer is zero and we fill in the corresponding grid cell with that value.**Then, we ask the next question, what is the LCS between "AB" and "B"?**Here, we have a match, and can fill in the appropriate value 1**.If we continue, we get to a final matrix that looks as follows, with a **2** in the bottom right corner.The final LCS will be that value **2** *normalized* by the number of n-grams in A. So, our normalized value is 2/4 = **0.5**. The matrix rulesOne thing to notice here is that, you can efficiently fill up this matrix one cell at a time. Each grid cell only depends on the values in the grid cells that are directly on top and to the left of it, or on the diagonal/top-left. The rules are as follows:* Start with a matrix that has one extra row and column of zeros.* As you traverse your string: * If there is a match, fill that grid cell with the value to the top-left of that cell *plus* one. So, in our case, when we found a matching B-B, we added +1 to the value in the top-left of the matching cell, 0. * If there is not a match, take the *maximum* value from either directly to the left or the top cell, and carry that value over to the non-match cell.After completely filling the matrix, **the bottom-right cell will hold the non-normalized LCS value**.This matrix treatment can be applied to a set of words instead of letters. Your function should apply this to the words in two texts and return the normalized LCS value. | # Compute the normalized LCS given an answer text and a source text
def lcs_norm_word(answer_text, source_text):
'''Computes the longest common subsequence of words in two texts; returns a normalized value.
:param answer_text: The pre-processed text for an answer text
:param source_text: The pre-processed text for an answer's associated source text
:return: A normalized LCS value'''
# your code here
ans_words = answer_text.split()
src_words = source_text.split()
ans_length = len(ans_words)
src_length = len(src_words)
lcs = np.zeros((ans_length + 1, src_length+1),dtype=int)
for i, ans in enumerate(ans_words,1):
for j, src in enumerate(src_words,1):
if ans == src:
lcs[i][j] = lcs[i-1][j-1] +1
else:
lcs[i][j] = max(lcs[i-1][j], lcs[i][j-1])
lcs_res = lcs[i][j]/i
return lcs_res | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
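The matrix walk-through above used the letter strings "ABCD" and "BD"; the same dynamic-programming recurrence can be checked at the character level with a few lines (an optional illustration, not required by the project):

```python
# Character-level version of the LCS matrix from the example above ("ABCD" vs "BD")
import numpy as np

A, S = "ABCD", "BD"
lcs = np.zeros((len(A) + 1, len(S) + 1), dtype=int)
for i, a in enumerate(A, 1):
    for j, s in enumerate(S, 1):
        lcs[i][j] = lcs[i-1][j-1] + 1 if a == s else max(lcs[i-1][j], lcs[i][j-1])
print(lcs[len(A)][len(S)] / len(A))  # 2 / 4 = 0.5, matching the worked example
```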
Test cellsLet's start by testing out your code on the example given in the initial description.In the below cell, we have specified strings A (answer text) and S (original source text). We know that these texts have 20 words in common and the submitted answer is 27 words long, so the normalized, longest common subsequence should be 20/27. | # Run the test scenario from above
# does your function return the expected value?
A = "i think pagerank is a link analysis algorithm used by google that uses a system of weights attached to each element of a hyperlinked set of documents"
S = "pagerank is a link analysis algorithm used by the google internet search engine that assigns a numerical weighting to each element of a hyperlinked set of documents"
# calculate LCS
lcs = lcs_norm_word(A, S)
print('LCS = ', lcs)
# expected value test
assert lcs==20/27., "Incorrect LCS value, expected about 0.7408, got "+str(lcs)
print('Test passed!') | LCS = 0.7407407407407407
Test passed!
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
This next cell runs a more rigorous test. | # run test cell
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# test lcs implementation
# params: complete_df from before, and lcs_norm_word function
tests.test_lcs(complete_df, lcs_norm_word) | Tests Passed!
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
Finally, take a look at a few resultant values for `lcs_norm_word`. Just like before, you should see that higher values correspond to higher levels of plagiarism. | # test on your own
test_indices = range(5) # look at first few files
category_vals = []
lcs_norm_vals = []
# iterate through first few docs and calculate LCS
for i in test_indices:
category_vals.append(complete_df.loc[i, 'Category'])
# get texts to compare
answer_text = complete_df.loc[i, 'Text']
task = complete_df.loc[i, 'Task']
# we know that source texts have Class = -1
orig_rows = complete_df[(complete_df['Class'] == -1)]
orig_row = orig_rows[(orig_rows['Task'] == task)]
source_text = orig_row['Text'].values[0]
# calculate lcs
lcs_val = lcs_norm_word(answer_text, source_text)
lcs_norm_vals.append(lcs_val)
# print out result, does it make sense?
print('Original category values: \n', category_vals)
print()
print('Normalized LCS values: \n', lcs_norm_vals) | Original category values:
[0, 3, 2, 1, 0]
Normalized LCS values:
[0.1917808219178082, 0.8207547169811321, 0.8464912280701754, 0.3160621761658031, 0.24257425742574257]
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
--- Create All Features Now that you've completed the feature calculation functions, it's time to actually create multiple features and decide on which ones to use in your final model! In the below cells, you're provided two helper functions to help you create multiple features and store those in a DataFrame, `features_df`. Creating multiple containment features Your completed `calculate_containment` function will be called in the next cell, which defines the helper function `create_containment_features`. > This function returns a list of containment features, calculated for a given `n` and for *all* files in a df (assumed to be the `complete_df`). For our original files, the containment value is set to a special value, -1. This function gives you the ability to easily create several containment features, of different n-gram lengths, for each of our text files. | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Function returns a list of containment features, calculated for a given n
# Should return a list of length 100 for all files in a complete_df
def create_containment_features(df, n, column_name=None):
containment_values = []
if(column_name==None):
column_name = 'c_'+str(n) # c_1, c_2, .. c_n
# iterates through dataframe rows
for i in df.index:
file = df.loc[i, 'File']
# Computes features using calculate_containment function
if df.loc[i,'Category'] > -1:
c = calculate_containment(df, n, file)
containment_values.append(c)
# Sets value to -1 for original tasks
else:
containment_values.append(-1)
print(str(n)+'-gram containment features created!')
return containment_values
| _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
Creating LCS features Below, your complete `lcs_norm_word` function is used to create a list of LCS features for all the answer files in a given DataFrame (again, this assumes you are passing in the `complete_df`). It assigns a special value, -1, for our original source files. | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Function creates lcs feature and add it to the dataframe
def create_lcs_features(df, column_name='lcs_word'):
lcs_values = []
# iterate through files in dataframe
for i in df.index:
# Computes LCS_norm words feature using function above for answer tasks
if df.loc[i,'Category'] > -1:
# get texts to compare
answer_text = df.loc[i, 'Text']
task = df.loc[i, 'Task']
# we know that source texts have Class = -1
orig_rows = df[(df['Class'] == -1)]
orig_row = orig_rows[(orig_rows['Task'] == task)]
source_text = orig_row['Text'].values[0]
# calculate lcs
lcs = lcs_norm_word(answer_text, source_text)
lcs_values.append(lcs)
# Sets to -1 for original tasks
else:
lcs_values.append(-1)
print('LCS features created!')
return lcs_values
| _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
EXERCISE: Create a features DataFrame by selecting an `ngram_range`The paper suggests calculating the following features: containment *1-gram to 5-gram* and *longest common subsequence*. > In this exercise, you can choose to create even more features, for example from *1-gram to 7-gram* containment features and *longest common subsequence*. You'll want to create at least 6 features to choose from as you think about which to give to your final, classification model. Defining and comparing at least 6 different features allows you to discard any features that seem redundant, and choose to use the best features for your final model!In the below cell **define an n-gram range**; these will be the n's you use to create n-gram containment features. The rest of the feature creation code is provided. | # Define an ngram range
ngram_range = range(1,7)
# The following code may take a minute to run, depending on your ngram_range
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
features_list = []
# Create features in a features_df
all_features = np.zeros((len(ngram_range)+1, len(complete_df)))
# Calculate features for containment for ngrams in range
i=0
for n in ngram_range:
column_name = 'c_'+str(n)
features_list.append(column_name)
# create containment features
all_features[i]=np.squeeze(create_containment_features(complete_df, n))
i+=1
# Calculate features for LCS_Norm Words
features_list.append('lcs_word')
all_features[i]= np.squeeze(create_lcs_features(complete_df))
# create a features dataframe
features_df = pd.DataFrame(np.transpose(all_features), columns=features_list)
# Print all features/columns
print()
print('Features: ', features_list)
print()
# print some results
features_df.head(10) | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
Correlated FeaturesYou should use feature correlation across the *entire* dataset to determine which features are ***too*** **highly-correlated** with each other to include both features in a single model. For this analysis, you can use the *entire* dataset due to the small sample size we have. All of our features try to measure the similarity between two texts. Since our features are designed to measure similarity, it is expected that these features will be highly-correlated. Many classification models, for example a Naive Bayes classifier, rely on the assumption that features are *not* highly correlated; highly-correlated features may over-inflate the importance of a single feature. So, you'll want to choose your features based on which pairings have the lowest correlation. These correlation values range between 0 and 1; from low to high correlation, and are displayed in a [correlation matrix](https://www.displayr.com/what-is-a-correlation-matrix/), below. | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Create correlation matrix for just Features to determine different models to test
corr_matrix = features_df.corr().abs().round(2)
# display shows all of a dataframe
display(corr_matrix) | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
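One lightweight way to act on this matrix is to pick a cutoff and list which feature pairs exceed it, then drop one feature from each offending pair. The sketch below is an optional aid; the 0.97 threshold is purely illustrative:

```python
# Optional helper: list feature pairs whose absolute correlation exceeds a chosen cutoff
cutoff = 0.97  # illustrative value; pick your own based on the matrix above
cols = corr_matrix.columns
for i in range(len(cols)):
    for j in range(i + 1, len(cols)):
        if corr_matrix.iloc[i, j] > cutoff:
            print(cols[i], cols[j], corr_matrix.iloc[i, j])
```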
EXERCISE: Create selected train/test dataComplete the `train_test_data` function below. This function should take in the following parameters:* `complete_df`: A DataFrame that contains all of our processed text data, file info, datatypes, and class labels* `features_df`: A DataFrame of all calculated features, such as containment for ngrams, n= 1-5, and lcs values for each text file listed in the `complete_df` (this was created in the above cells)* `selected_features`: A list of feature column names, ex. `['c_1', 'lcs_word']`, which will be used to select the final features in creating train/test sets of data.It should return two tuples:* `(train_x, train_y)`, selected training features and their corresponding class labels (0/1)* `(test_x, test_y)`, selected training features and their corresponding class labels (0/1)** Note: x and y should be arrays of feature values and numerical class labels, respectively; not DataFrames.**Looking at the above correlation matrix, you should decide on a **cutoff** correlation value, less than 1.0, to determine which sets of features are *too* highly-correlated to be included in the final training and test data. If you cannot find features that are less correlated than some cutoff value, it is suggested that you increase the number of features (longer n-grams) to choose from or use *only one or two* features in your final model to avoid introducing highly-correlated features.Recall that the `complete_df` has a `Datatype` column that indicates whether data should be `train` or `test` data; this should help you split the data appropriately. | # Takes in dataframes and a list of selected features (column names)
# and returns (train_x, train_y), (test_x, test_y)
def train_test_data(complete_df, features_df, selected_features):
'''Gets selected training and test features from given dataframes, and
returns tuples for training and test features and their corresponding class labels.
:param complete_df: A dataframe with all of our processed text data, datatypes, and labels
:param features_df: A dataframe of all computed, similarity features
:param selected_features: An array of selected features that correspond to certain columns in `features_df`
:return: training and test features and labels: (train_x, train_y), (test_x, test_y)'''
df = pd.concat([complete_df, features_df[selected_features]], axis=1)
df_train = df[df['Datatype']=='train']
df_test = df[df['Datatype']=='test']
# get the training features
train_x = df_train[selected_features].values
# And training class labels (0 or 1)
train_y = df_train['Class'].values
# get the test features and labels
test_x = df_test[selected_features].values
test_y = df_test['Class'].values
return (train_x, train_y), (test_x, test_y)
| _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
Test cellsBelow, test out your implementation and create the final train/test data. | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
test_selection = list(features_df)[:2] # first couple columns as a test
# test that the correct train/test data is created
(train_x, train_y), (test_x, test_y) = train_test_data(complete_df, features_df, test_selection)
# params: generated train/test data
tests.test_data_split(train_x, train_y, test_x, test_y) | Tests Passed!
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
EXERCISE: Select "good" features If you passed the test above, you can create your own train/test data, below. Define a list of features you'd like to include in your final model, `selected_features`; this is a list of the feature names you want to include. | # Select your list of features, this should be column names from features_df
# ex. ['c_1', 'lcs_word']
selected_features = ['c_1', 'c_5', 'lcs_word']
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
(train_x, train_y), (test_x, test_y) = train_test_data(complete_df, features_df, selected_features)
# check that division of samples seems correct
# these should add up to 95 (100 - 5 original files)
print('Training size: ', len(train_x))
print('Test size: ', len(test_x))
print()
print('Training df sample: \n', train_x[:10]) | Training size: 70
Test size: 25
Training df sample:
[[0.39814815 0. 0.19178082]
[0.86936937 0.44954128 0.84649123]
[0.59358289 0.08196721 0.31606218]
[0.54450262 0. 0.24257426]
[0.32950192 0. 0.16117216]
[0.59030837 0. 0.30165289]
[0.75977654 0.24571429 0.48430493]
[0.51612903 0. 0.27083333]
[0.44086022 0. 0.22395833]
[0.97945205 0.78873239 0.9 ]]
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
Question 2: How did you decide on which features to include in your final model? **Answer:** `lcs_word` is one of the most informative similarity measures, so I selected it. It is also important to assess the number of words the texts have in common, so I kept `c_1` and `c_5`; the remaining containment features were dropped because their correlation with the selected ones is 1 for `c_4` and `c_6` and close to 1 for `c_2` and `c_3`, so they would add little new information. --- Creating Final Data Files Now, you are almost ready to move on to training a model in SageMaker! You'll want to access your train and test data in SageMaker and upload it to S3. In this project, SageMaker will expect the following format for your train/test data: * Training and test data should be saved in one `.csv` file each, ex. `train.csv` and `test.csv` * These files should have class labels in the first column and features in the rest of the columns. This format follows the practice, outlined in the [SageMaker documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html), which reads: "Amazon SageMaker requires that a CSV file doesn't have a header record and that the target variable [class label] is in the first column." EXERCISE: Create csv files Define a function that takes in x (features) and y (labels) and saves them to one `.csv` file at the path `data_dir/filename`. It may be useful to use pandas to merge your features and labels into one DataFrame and then convert that into a csv file. You can make sure to get rid of any incomplete rows, in a DataFrame, by using `dropna`. | def make_csv(x, y, filename, data_dir):
'''Merges features and labels and converts them into one csv file with labels in the first column.
:param x: Data features
:param y: Data labels
:param file_name: Name of csv file, ex. 'train.csv'
:param data_dir: The directory where files will be saved
'''
# make data dir, if it does not exist
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# your code here
pd.concat([pd.DataFrame(y), pd.DataFrame(x)], axis=1).dropna().to_csv(os.path.join(data_dir, filename), header=False, index=False)
# nothing is returned, but a print statement indicates that the function has run
print('Path created: '+str(data_dir)+'/'+str(filename)) | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
Test cellsTest that your code produces the correct format for a `.csv` file, given some text features and labels. | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
fake_x = [ [0.39814815, 0.0001, 0.19178082],
[0.86936937, 0.44954128, 0.84649123],
[0.44086022, 0., 0.22395833] ]
fake_y = [0, 1, 1]
make_csv(fake_x, fake_y, filename='to_delete.csv', data_dir='test_csv')
# read in and test dimensions
fake_df = pd.read_csv('test_csv/to_delete.csv', header=None)
# check shape
assert fake_df.shape==(3, 4), \
'The file should have as many rows as data_points and as many columns as features+1 (for indices).'
# check that first column = labels
assert np.all(fake_df.iloc[:,0].values==fake_y), 'First column is not equal to the labels, fake_y.'
print('Tests passed!')
# delete the test csv file, generated above
! rm -rf test_csv | _____no_output_____ | MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
If you've passed the tests above, run the following cell to create `train.csv` and `test.csv` files in a directory that you specify! This will save the data in a local directory. Remember the name of this directory because you will reference it again when uploading this data to S3. | # can change directory, if you want
data_dir = 'plagiarism_data'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
make_csv(train_x, train_y, filename='train.csv', data_dir=data_dir)
make_csv(test_x, test_y, filename='test.csv', data_dir=data_dir) | Path created: plagiarism_data/train.csv
Path created: plagiarism_data/test.csv
| MIT | Project_Plagiarism_Detection/2_Plagiarism_Feature_Engineering.ipynb | pooja63/Plagiarism-detection |
Part A | green_tif = gdal.Open("./GBDA2020_ML1/partA/green.tif")
green_data = green_tif.ReadAsArray().flatten()
nir_tif = gdal.Open("./GBDA2020_ML1/partA/nir.tif")
nir_data = nir_tif.ReadAsArray().flatten()
labels_tif = gdal.Open("./GBDA2020_ML1/partA/gt.tif")
labels_data = labels_tif.ReadAsArray().flatten()
labels_data = np.where(labels_data==255,1,labels_data)
data = np.concatenate((np.expand_dims(green_data,axis=1),np.expand_dims(nir_data,axis=1)),axis=1)
# AM: 03400121
# train-test set, 70-30 %
X_train, X_test, y_train, y_test = train_test_split(data, labels_data, test_size=0.3, random_state=121) | _____no_output_____ | MIT | GBDA_exercise_5.ipynb | giorgosouz/HSI-supervised-classification |
I initialize the models for the first question, train them, and display the required evaluation metrics. | my_clfs = {
"Gaussian Naive Bayes" : GaussianNB(),
"K-Nearest Neighbors" : KNeighborsClassifier(n_neighbors=3,n_jobs=-1),
"Simple Perceptron" : Perceptron(tol=1e-3, random_state=121,n_jobs=-1)
}
for title, clf in my_clfs.items():
print(title)
clf.fit(X_train,y_train)
pred = clf.predict(X_test)
print("Accuracy :", clf.score(X_test, y_test))
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
prec, rec, f1, sup = precision_recall_fscore_support(y_test, pred,average='binary')
print("Precision :", prec)
print("Recall :", rec)
print("F1 Score :", f1)
print("True Positives :", tp)
print("True Negatives :", tn)
print("False Positives :",fp)
print("False Negatives :",fn)
fig, ax = plt.subplots(figsize=(12, 6))
ax.set_title(title)
plot_confusion_matrix(clf, X_test, y_test, display_labels=["Land","Water"],ax=ax)
plt.show()
| Gaussian Naive Bayes
Accuracy : 0.9980966666666666
Precision : 0.9997233383835057
Recall : 0.9928042494177983
F1 Score : 0.9962517805683377
True Positives : 75884
True Negatives : 223545
False Positives : 21
False Negatives : 550
| MIT | GBDA_exercise_5.ipynb | giorgosouz/HSI-supervised-classification |
Based on the results produced by the algorithms, it is clear that the data are well defined and the two classes can be separated essentially perfectly. The results are excellent, with only the Gaussian classifier making a handful of mistakes. The problem the algorithms are asked to solve is relatively simple, and the large amount of training data makes these success rates possible. Part B I load and process the data for the second question. Class 0 is removed from both the data and the labels, and for convenience the remaining classes are remapped from 1-16 to 0-15. | data_b = np.load("./GBDA2020_ML1/partB/indianpinearray.npy").transpose().reshape((200,-1)).transpose().astype(np.int16)
labels_b = np.load("./GBDA2020_ML1/partB/IPgt.npy").flatten().astype(np.int16)
clean_data = np.delete(data_b,np.where(labels_b==0),0)
clean_labels = np.delete(labels_b,np.where(labels_b==0),0)
clean_labels = clean_labels-1 | _____no_output_____ | MIT | GBDA_exercise_5.ipynb | giorgosouz/HSI-supervised-classification |
The models to be examined require preprocessing of the data; specifically, the data are normalized. Two options were considered: 1. Max-Min normalization, which rescales each feature to the interval [0,1]. 2. Normalization using the mean and variance of each feature, so that each feature follows a standardized normal distribution. The second approach was chosen because it gave better results. | # scaler = MinMaxScaler()
scaler = StandardScaler()
scaler.fit(clean_data)
scaled_data = scaler.transform(clean_data) | _____no_output_____ | MIT | GBDA_exercise_5.ipynb | giorgosouz/HSI-supervised-classification |
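A quick way to confirm that the chosen standardization did what was intended is to look at the per-band statistics of the scaled array. An optional check on the variables defined above:

```python
# Optional check: after StandardScaler each of the 200 bands should have ~0 mean and ~1 std
print(scaled_data.mean(axis=0).round(3)[:5])  # first few band means, expected ~0
print(scaled_data.std(axis=0).round(3)[:5])   # first few band stds, expected ~1
```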
A validation set was also needed for the MLP models. To allow a fair comparison between all the examined models, the same training and test data were used in every case. | # AM: 03400121
# train-val-test 60-10-30 ,
X_train, X_test, y_train, y_test = train_test_split(scaled_data, clean_labels, test_size=0.3, random_state=121,stratify=clean_labels)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.143, random_state=121,stratify=y_train) #1/7 of train set or 1/10 of total data | _____no_output_____ | MIT | GBDA_exercise_5.ipynb | giorgosouz/HSI-supervised-classification |
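Because the splits are stratified on the 16 classes, the class frequencies in the train, validation and test sets should be nearly proportional. A small optional check:

```python
# Optional check: per-class counts should keep roughly the same proportions in each split
import numpy as np
for name, y in [("train", y_train), ("val", y_val), ("test", y_test)]:
    classes, counts = np.unique(y, return_counts=True)
    print(name, dict(zip(classes.tolist(), counts.tolist())))
```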
I initialize the SVM and RF models, train them, and display the required metrics. | my_clfs_b = {
"Support Vector Machine" : SVC(C=50),
"Random Forest" : RandomForestClassifier(criterion="entropy",max_features="sqrt",random_state=121,n_jobs=12),
}
for title, clf in my_clfs_b.items():
print(title)
clf.fit(X_train,y_train)
pred = clf.predict(X_test)
print("Accuracy :", clf.score(X_test, y_test))
prec, rec, f1, sup = precision_recall_fscore_support(y_test, pred)
print("Precision :", prec)
print("Recall :", rec)
print("F1 Score :", f1)
prec_micro, rec_micro, f1_micro, sup_micro = precision_recall_fscore_support(y_test, pred,average='micro')
print("Precision micro:", prec_micro)
print("Recall micro:", rec_micro)
print("F1 Score micro:", f1_micro)
fig, ax = plt.subplots(figsize=(12, 6))
ax.set_title(title)
plot_confusion_matrix(clf, X_test, y_test, ax=ax)
plt.show()
print("----------------------------------------------------------------------")
# break
| Support Vector Machine
Accuracy : 0.6386991869918699
Precision : [0.27777778 0.61571125 0.55597015 0.80519481 0.5112782 0.52763819
0.5 0.7 0.5 0.6294964 0.68970013 0.53157895
0.54054054 0.74185464 0.68918919 0.72727273]
Recall : [0.35714286 0.67757009 0.59839357 0.87323944 0.46896552 0.47945205
0.375 0.63636364 0.5 0.59931507 0.71777476 0.56741573
0.32786885 0.77894737 0.43965517 0.57142857]
F1 Score : [0.3125 0.64516129 0.57640232 0.83783784 0.48920863 0.50239234
0.42857143 0.66666667 0.5 0.61403509 0.70345745 0.54891304
0.40816327 0.75994865 0.53684211 0.64 ]
Precision micro: 0.6386991869918699
Recall micro: 0.6386991869918699
F1 Score micro: 0.6386991869918699
| MIT | GBDA_exercise_5.ipynb | giorgosouz/HSI-supervised-classification |
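The `C=50` used for the SVM above is hard-coded. One way such a value could be chosen (a sketch under the assumption that a small cross-validated grid search on the same training split is acceptable, not something done in the original notebook) is:

```python
# Sketch: selecting the SVC regularization strength C with cross-validation
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"C": [1, 10, 50, 100]}  # illustrative grid
search = GridSearchCV(SVC(), param_grid, cv=3, n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```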
Two simple networks were built for the MLP models. The first is a shallow network with no hidden layers, which maps the 200 input features directly to the 16 output classes. The second MLP architecture is a network with 3 hidden layers of 100, 50 and 25 neurons respectively. Overfitting was observed in the second network, so dropout was used to reduce its effect. Towards the end of the notebook there are training curves for the models in which the overfitting is clearly visible. | import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
import numpy as np
from torch.utils.data import TensorDataset, DataLoader
class my_MLP_V1(nn.Module):
def __init__(self):
super().__init__()
# self.fc1 = nn.Linear(200,50)
# self.fc2 = nn.Linear(50,25)
self.fc3 = nn.Linear(200,16)
def forward(self,x):
# x = F.relu(self.fc1(x))
# x = F.relu(self.fc2(x))
x = F.log_softmax(self.fc3(x),dim=1)
return x
class my_MLP_V2(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(200,100)
self.fc2 = nn.Linear(100,50)
self.fc3 = nn.Linear(50,25)
self.fc4 = nn.Linear(25,16)
self.dropout = nn.Dropout(p=0.2)
def forward(self,x):
# x = F.relu(self.fc1(x))
# x = F.relu(self.fc2(x))
# x = F.relu(self.fc3(x))
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
x = F.log_softmax(self.fc4(x),dim=1)
return x
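# --- added sanity check, not part of the original notebook ---
# Compare the sizes of the two architectures described above: the shallow my_MLP_V1
# maps 200 features straight to 16 classes (200*16 + 16 = 3,216 weights), while
# my_MLP_V2 adds three hidden layers of 100, 50 and 25 neurons plus dropout.
print(sum(p.numel() for p in my_MLP_V1().parameters()))
print(sum(p.numel() for p in my_MLP_V2().parameters()))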
# transform to torch tensors
tensor_x_train = torch.from_numpy(X_train).type(torch.FloatTensor)
tensor_y_train = torch.from_numpy(y_train).type(torch.LongTensor)
tensor_x_val = torch.from_numpy(X_val).type(torch.FloatTensor)
tensor_y_val = torch.from_numpy(y_val).type(torch.LongTensor)
tensor_x_test = torch.from_numpy(X_test).type(torch.FloatTensor)
tensor_y_test = torch.from_numpy(y_test).type(torch.LongTensor)
# create train, validation, test datsets and create dataloaders
my_dataset_train = TensorDataset(tensor_x_train,tensor_y_train)
my_dataloader_train = DataLoader(my_dataset_train,batch_size=64, shuffle=True)
my_dataset_val = TensorDataset(tensor_x_val,tensor_y_val)
my_dataloader_val = DataLoader(my_dataset_val,batch_size=64, shuffle=True)
my_dataset_test = TensorDataset(tensor_x_test,tensor_y_test)
my_dataloader_test = DataLoader(my_dataset_test,batch_size=64, shuffle=True)
# mlp train function
def MLP_train_model(model,epochs=300):
optimizer = optim.Adam(model.parameters(), lr=0.001)
criterion = nn.NLLLoss()
train_losses, val_losses = [], []
for e in range(epochs):
tot_train_loss = 0
for images, labels in my_dataloader_train:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
tot_train_loss += loss.item()
else:
tot_val_loss = 0
# Number of correct predictions on the validation set
val_correct = 0
# Turn off gradients for validation and put model on evaluation mode
with torch.no_grad():
model.eval()
for images, labels in my_dataloader_val:
log_ps = model(images)
loss = criterion(log_ps, labels)
tot_val_loss += loss.item()
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
val_correct += equals.sum().item()
# Put model on training mode again
model.train()
# Get mean loss to enable comparison between train and test sets
train_loss = tot_train_loss / len(my_dataloader_train.dataset)
val_loss = tot_val_loss / len(my_dataloader_val.dataset)
# At completion of epoch
train_losses.append(train_loss)
val_losses.append(val_loss)
if (e)%10==9 or e==0:
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Training Loss: {:.3f}.. ".format(train_loss),
"Validation Loss: {:.3f}.. ".format(val_loss),
"Validation Accuracy: {:.3f}".format(val_correct / len(my_dataloader_val.dataset)))
plt.figure(figsize=(14,7))
plt.plot(train_losses, label='Training loss')
plt.plot(val_losses, label='Validation loss')
plt.legend(frameon=False)
return
model1 = my_MLP_V1()
MLP_train_model(model1)
# without dropout layers we get overfitting
# (this run presumably used the plain ReLU forward pass that is commented out in my_MLP_V2)
model2 = my_MLP_V2()
MLP_train_model(model2)
# with dropout to counter overfitting
model2 = my_MLP_V2()
MLP_train_model(model2)
# mlp test function and results
def MLP_test_results(model):
tot_test_loss = 0
test_correct = 0
all_predictions = []
all_labels = []
    # only the loss criterion is needed at test time (no optimizer / weight updates)
    criterion = nn.NLLLoss()
with torch.no_grad():
model.eval()
for images, labels in my_dataloader_test:
log_ps = model(images)
loss = criterion(log_ps, labels)
tot_test_loss += loss.item()
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
all_predictions.append(list(top_class.numpy().squeeze()))
all_labels.append(list(labels.numpy().squeeze()))
equals = top_class == labels.view(*top_class.shape)
test_correct += equals.sum().item()
test_loss = tot_test_loss / len(my_dataloader_test.dataset)
print("Test Loss: {:.3f}.. ".format(test_loss),"Test Accuracy: {:.3f}".format(test_correct / len(my_dataloader_test.dataset)))
pred_list = [item for sublist in all_predictions for item in sublist]
labels_list = [item for sublist in all_labels for item in sublist]
prec, rec, f1, sup = precision_recall_fscore_support(labels_list, pred_list)
print("Precision :", prec)
print("Recall :", rec)
print("F1 Score :", f1)
prec_micro, rec_micro, f1_micro, sup_micro = precision_recall_fscore_support(labels_list, pred_list,average='micro')
print("Precision micro:", prec_micro)
print("Recall micro:", rec_micro)
print("F1 Score micro:", f1_micro)
cm=confusion_matrix(labels_list,pred_list)
df_cm = pd.DataFrame(cm, index = range(1,17),
columns = range(1,17))
plt.figure(figsize = (10,8))
sn.heatmap(df_cm, annot=True,fmt='d')
t = plt.yticks(rotation=0)
return
MLP_test_results(model1)
MLP_test_results(model2) | Test Loss: 0.038.. Test Accuracy: 0.631
Precision : [0.33333333 0.64332604 0.546875 0.78666667 0.56557377 0.50490196
0.33333333 0.57228916 0.5 0.63773585 0.69559413 0.56424581
0.33870968 0.75193798 0.52777778 0.51851852]
Recall : [0.21428571 0.68691589 0.562249 0.83098592 0.47586207 0.47031963
0.125 0.66433566 0.5 0.57876712 0.70691995 0.56741573
0.3442623 0.76578947 0.49137931 0.5 ]
F1 Score : [0.26086957 0.66440678 0.55445545 0.80821918 0.51685393 0.48699764
0.18181818 0.61488673 0.5 0.60682226 0.70121131 0.56582633
0.34146341 0.75880052 0.50892857 0.50909091]
Precision micro: 0.631219512195122
Recall micro: 0.631219512195122
F1 Score micro: 0.631219512195122
| MIT | GBDA_exercise_5.ipynb | giorgosouz/HSI-supervised-classification |
MCMC visualizations | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from results import MCMCResults
import sys
sys.path.insert(0, '/Users/bmmorris/git/friedrich')
from friedrich.lightcurve import hat11_params_morris
transit_params = hat11_params_morris()
from corner import corner
transit_params = hat11_params_morris()
m1 = MCMCResults.from_stsp_local('/Users/bmmorris/git/friedrich/hat11/arctic/first_half_mcmc.txt',
'/Users/bmmorris/git/friedrich/hat11/arctic/first_half.dat',
transit_params=hat11_params_morris())
m2 = MCMCResults.from_stsp_local('/Users/bmmorris/git/friedrich/hat11/arctic/second_half_mcmc.txt',
'/Users/bmmorris/git/friedrich/hat11/arctic/second_half.dat',
transit_params=hat11_params_morris())
# fig, ax = m.light_curve.plot_transit() | _____no_output_____ | MIT | example_vis_arctic.ipynb | bmorris3/stsp_osg_results |
Identify the burn-in period | burn_in_step = 2000
# `m` is not defined in this excerpt; it is assumed here to refer to one of the
# loaded results objects, so bind it explicitly (use m1 or m2 as needed):
m = m1
for i in range(len(m.chi2_chains)):
    plt.plot(np.log(m.chi2_chains[i]), '.', color='k', alpha=0.005)
plt.xlabel('step')
plt.ylabel(r'$\log\chi^2$')
#plt.axvline(burn_in_step, color='k', lw=2)
plt.show()
flat_radius = []
flat_phi = []
flat_theta = []
for i in range(len(m.chi2_chains)):
flat_radius.append(m.radius_chains[i][burn_in_step:, :])
flat_phi.append(m.phi_chains[i][burn_in_step:, :])
flat_theta.append(m.theta_chains[i][burn_in_step:, :])
flat_radius = np.vstack(flat_radius)
flat_phi = np.vstack(flat_phi)
flat_theta = np.vstack(flat_theta)
from astropy.time import Time
errorbar_color = '#b3b3b3'
fontsize = 16
fig, ax = plt.subplots(1, figsize=(8, 5))
ax.errorbar(m1.light_curve.kepler_lc.times.plot_date, m1.light_curve.fluxes_kepler,
m1.light_curve.kepler_lc.errors, fmt='.',
color='k', ecolor=errorbar_color, capsize=0, label='Kepler')
ax.plot(m1.light_curve.model_lc.times.plot_date, m1.light_curve.fluxes_model, 'r', label='STSP', lw=2)
ax.errorbar(m2.light_curve.kepler_lc.times.plot_date, m2.light_curve.fluxes_kepler,
m2.light_curve.kepler_lc.errors, fmt='.',
color='k', ecolor=errorbar_color, capsize=0, label='Kepler')
ax.plot(m2.light_curve.model_lc.times.plot_date, m2.light_curve.fluxes_model, 'r', label='STSP', lw=2)
label_times = Time(ax.get_xticks(), format='plot_date')
ax.set_xticklabels([lt.strftime("%H:%M") for lt in label_times.datetime])
ax.set_xlabel('Time on {0} UTC'.format(label_times[0].datetime.date()),
fontsize=fontsize)
ax.set_ylabel('Flux', fontsize=fontsize)
ax.axvline(m2.light_curve.kepler_lc.times.plot_date[0])
# ax.set_xlim([m.light_curve.kepler_lc.times.plot_date.min(),
# m.light_curve.kepler_lc.times.plot_date.max()])
fig.savefig('plots/transit_{0:03d}.png'.format(m.window_ind), bbox_inches='tight', dpi=200)
!mkdir arctic_outputs
for m in [m1, m2]:
corner(np.vstack([m.radius.ravel(), m.theta.ravel(), m.phi.ravel()]).T,
labels=['$r$', r'$\theta$', r'$\phi$'])
attrs = ['radius', 'theta', 'phi']
trans = [lambda r: r,
lambda t: np.degrees(np.pi/2 - t),
lambda p: np.degrees(p)]
spots_rad = []
spots_deg = []
for m in [m1, m2]:
measurements_rad = []
measurements_deg = []
for attr, transformation in zip(attrs, trans):
l, med, u = np.percentile(getattr(m, attr).ravel(), [16, 50, 84])
        # lower/upper are the one-sigma offsets below/above the median (16th/84th percentiles)
        measurements_rad.append(dict(attr=attr, lower=med-l, best=med, upper=u-med))
        l, med, u = np.percentile(transformation(getattr(m, attr).ravel()), [16, 50, 84])
        measurements_deg.append(dict(attr=attr, lower=med-l, best=med, upper=u-med))
spots_rad.append(measurements_rad)
spots_deg.append(measurements_deg)
from astropy.table import Table
for measurements_deg in spots_deg:
print(Table(rows=measurements_deg))
skip = 100
#plt.subplot(111, projection='hammer')
plt.scatter(m1.theta.ravel()[::skip], m1.phi.ravel()[::skip], s=4, alpha=0.2)
plt.scatter(m2.theta.ravel()[::skip], m2.phi.ravel()[::skip], s=4, alpha=0.2) | _____no_output_____ | MIT | example_vis_arctic.ipynb | bmorris3/stsp_osg_results |
Control structures: control structures are used to express the flow of execution. By default, execution is sequential, i.e. statements are executed one after another, in the order in which they are written (see the short sketch after this cell for a structure that alters this flow): | print(1)
print(2)
print(3)
print(4) | 1
2
3
4
| MIT | KonputaziorakoSarrera-MAT/Gardenkiak/Kontrol egiturak.ipynb | mpenagar/Irakaskuntza-Docencia-2019-2020 |
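The cell above only demonstrates the default sequential flow. As a minimal illustrative sketch (not part of the original notebook), the snippet below shows how an `if`/`else` branch and a `for` loop alter that order:

x = 3
if x % 2 == 0:
    print("x is even")   # skipped when x is odd
else:
    print("x is odd")    # executed instead, since 3 is odd
for i in range(3):       # the loop body runs three times: 0, 1, 2
    print(i)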
Utility functions | def layer_extraction(dcgan, file_names):
    # `dcgan`, `FLAGS` and the TensorFlow session `sess` are assumed to be created in earlier cells of the notebook
    return dcgan.get_feature(FLAGS, file_names)
def maxpooling(disc):
    # pool layer i with kernel/stride 2**(4-i) so that every discriminator layer
    # is reduced to the same spatial size before flattening
    kernel_stride_size = 4  # note: unused below
maxpooling = [
tf.nn.max_pool(disc[i],ksize=[1,2**(4-i),2**(4-i),1],
strides=[1,2**(4-i),2**(4-i),1],padding='SAME')
for i in range(4)
]
# tf.global_variables_initializer().run()
maxpool_result = sess.run(maxpooling)
# for idx in range(4):
# print(idx, maxpool_result[idx].shape)
return maxpool_result
def flatten(disc):
flatten = [
tf.reshape(disc[i],[64, -1])
for i in range(4)
]
# tf.global_variables_initializer().run()
flatten_result = sess.run(flatten)
return flatten_result
def concat(disc):
concat = tf.concat(disc,1)
# tf.global_variables_initializer().run()
concat_result = sess.run(concat)
return concat_result
def feature_ext_GAN(file_names):
ret = layer_extraction(dcgan, file_names)
ret = maxpooling(ret)
ret = flatten(ret)
ret = concat(ret)
return ret
| _____no_output_____ | MIT | .ipynb_checkpoints/20170622 Experiment-checkpoint.ipynb | JustWon/DCGAN-Experiment |
Integration | for term in range(11,15):
print('%d ~ %d' % (50*term,50*(term+1)))
disc_list = []
batch_list = []
file_names = []
for idx in range(50*term,50*(term+1)):
patch_path ="/media/dongwonshin/Ubuntu Data/Datasets/FAB-MAP/Image Data/City Centre/patches/#300/"
data = sorted(glob("%s/%04d/*.jpg" % (patch_path, idx)))
# patch_path ="/media/dongwonshin/Ubuntu Data/Datasets/Places365/Large_images/val_large/patches"
# data = glob("%s/Places365_val_%08d/*.jpg" % (patch_path, idx))
file_names.append(data)
file_names=np.concatenate(file_names)
print('total:',len(file_names))
# print(file_names)
for idx in range(0, len(file_names)-64,64):
batch_files = file_names[idx: idx+64]
disc = feature_ext_GAN(batch_files)
disc_list.append(disc)
batch_list.append(batch_files)
sys.stdout.write('.')
final_disc_list = np.concatenate(disc_list)
final_batch_list = np.concatenate(batch_list)
X = np.array(final_disc_list)
pca = PCA(n_components = 128)
pca.fit(X)
final_disc_list = pca.transform(X)
for idx, name in enumerate(final_batch_list):
# output_filename = '/media/dongwonshin/Ubuntu Data/Datasets/Places365/Large_images/val_large/descs/128dim/' + (name.split('/')[-2])+'.desc'
output_filename = '/media/dongwonshin/Ubuntu Data/Datasets/FAB-MAP/Image Data/City Centre/descs/128dim/' + (name.split('/')[-2])+'.desc'
with open(output_filename,'at') as fp:
for v in final_disc_list[idx]:
fp.write('%f ' % v)
fp.write('\n')
desc_path ="/media/dongwonshin/Ubuntu Data/Datasets/FAB-MAP/Image Data/City Centre/descs"
desc_name = glob("%s/*.desc" % (desc_path))
desc_name.sort()
for i, d in enumerate(desc_name):
if (i+1 != int(d[77:81])):
print(i+1)
break
| 1020
| MIT | .ipynb_checkpoints/20170622 Experiment-checkpoint.ipynb | JustWon/DCGAN-Experiment |
Descriptor Save | for idx, name in enumerate(final_batch_list):
output_filename = '/media/dongwonshin/Ubuntu Data/Datasets/FAB-MAP/Image Data/City Centre/descs/' + (name.split('/')[-2])+'.desc'
with open(output_filename,'at') as fp:
for v in final_disc_list[idx]:
fp.write('%f ' % v)
fp.write('\n') | _____no_output_____ | MIT | .ipynb_checkpoints/20170622 Experiment-checkpoint.ipynb | JustWon/DCGAN-Experiment |
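As a small illustrative sketch (not part of the original notebook), a saved `.desc` file can be read back with NumPy. Each line written above is one space-separated descriptor vector, so a file with several appended lines loads with one row per patch of that image; the file name `0001.desc` below is hypothetical:

import numpy as np
# rows: patches belonging to image 0001, columns: descriptor dimensions
desc = np.loadtxt('/media/dongwonshin/Ubuntu Data/Datasets/FAB-MAP/Image Data/City Centre/descs/0001.desc')
print(desc.shape)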
Result Analysis | # import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
SURF_result_text = '/home/dongwonshin/Desktop/LC_text_results/20170622_SURF_result_2.txt'
DCGAN_result_text = '/home/dongwonshin/Desktop/LC_text_results/20170622_DCGAN_result_128dim_2.txt'
how_many = 199
with open(SURF_result_text) as fp:
SURF_current_idx = []
SURF_most_related_idx = []
lines = fp.readlines()
for line in lines:
ele = line.strip().split(',')
SURF_current_idx.append(ele[0].split('=')[1])
SURF_most_related_idx.append(ele[2].split('=')[1])
with open(DCGAN_result_text) as fp:
DCGAN_current_idx = []
DCGAN_most_related_idx = []
lines = fp.readlines()
for line in lines:
ele = line.strip().split(',')
DCGAN_current_idx.append(ele[0].split('=')[1])
DCGAN_most_related_idx.append(ele[2].split('=')[1])
cnt = 0
LC_cs_cnt = 0
LC_cd_cnt = 0
for c, s, d in zip(SURF_current_idx, SURF_most_related_idx, DCGAN_most_related_idx):
gps_c = np.array(GPS_info_list[int(c)])
gps_s = np.array(GPS_info_list[int(s)])
gps_d = np.array(GPS_info_list[int(d)])
gps_cs = np.linalg.norm(gps_c-gps_s)
gps_cd = np.linalg.norm(gps_c-gps_d)
threshold = 5
if (gps_cs < threshold):
LC_cs = 'true'
LC_cs_cnt += 1
else:
LC_cs = 'false'
if (gps_cd < threshold):
LC_cd = 'true'
LC_cd_cnt += 1
else:
LC_cd = 'false'
# print('%4d' % int(c), gps_c)
# print('%4d' % int(s), gps_s, gps_cs, LC_cs)
# print('%4d' % int(d), gps_d, gps_cd, LC_cd)
# print()
# cur_path = '/media/dongwonshin/Ubuntu Data/Datasets/FAB-MAP/Image Data/City Centre/images/%04d.jpg' % int(c)
# surf_path = '/media/dongwonshin/Ubuntu Data/Datasets/FAB-MAP/Image Data/City Centre/images/%04d.jpg' % int(s)
# dcgan_path = '/media/dongwonshin/Ubuntu Data/Datasets/FAB-MAP/Image Data/City Centre/images/%04d.jpg' % int(d)
# print(cur_path)
# print(surf_path)
# print(dcgan_path)
# cur_img = mpimg.imread(cur_path)
# surf_img = mpimg.imread(surf_path)
# dcgan_img = mpimg.imread(dcgan_path)
# one_img = np.hstack([cur_img, surf_img, dcgan_img])
# plt.imshow(one_img)
# plt.show()
if (cnt > how_many):
break
else:
cnt += 1
print('LC_cs_cnt = %d, LC_cd_cnt = %d' % (LC_cs_cnt, LC_cd_cnt))
| LC_cs_cnt = 82, LC_cd_cnt = 150
| MIT | .ipynb_checkpoints/20170622 Experiment-checkpoint.ipynb | JustWon/DCGAN-Experiment |
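As a rough reading of the printed counts (an added sketch, not from the original notebook): the comparison loop above processes how_many + 2 = 201 query frames before breaking, so the counts can be converted into approximate loop-closure hit rates under the 5 m GPS threshold:

n_processed = how_many + 2                                   # cnt runs 0..200 inclusive before the break
print('SURF  hit rate: %.2f' % (LC_cs_cnt / n_processed))    # ~0.41
print('DCGAN hit rate: %.2f' % (LC_cd_cnt / n_processed))    # ~0.75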
Loop Closure GroundTruth Text Handling | LC_corr_list = []
with open('/media/dongwonshin/Ubuntu Data/Datasets/FAB-MAP/GroundTruth Text/CityCentreGroundTruth.txt') as fp:
row = 1
for line in fp:
row_ele = line.strip().split(',')
if ('1' in row_ele):
col = 1
for r in row_ele:
if (r == '1'):
# print('(row, col) (%d, %d)' % (row, col))
LC_corr_list.append([row,col])
col+=1
row += 1
else:
print('eof')
GPS_info_list = [[0,0]] # dummy for a start index 1
with open('/media/dongwonshin/Ubuntu Data/Datasets/FAB-MAP/GroundTruth Text/CityCentreGPSData.txt') as fp:
for line in fp:
GPS_info_list.append(
[float(line.strip().split(' ')[1]) , float(line.strip().split(' ')[2])]
)
else:
print('eof')
def isOdd(val):
return not (val%2==0)
def isEven(val):
return (val%2==0)
for i, corr in enumerate(LC_corr_list):
    # skip mixed-parity pairs: odd and even indices presumably come from the two
    # different cameras in the City Centre sequence, so only same-camera pairs are shown
    if (isOdd(corr[0]) and isEven(corr[1])):
        continue
    if (isEven(corr[0]) and isOdd(corr[1])):
        continue
img_i_path = ('/media/dongwonshin/Ubuntu Data/Datasets/FAB-MAP/Image Data/City Centre/images/%04d.jpg' % corr[0])
img_j_path = ('/media/dongwonshin/Ubuntu Data/Datasets/FAB-MAP/Image Data/City Centre/images/%04d.jpg' % corr[1])
print(corr[0], GPS_info_list[corr[0]])
print(corr[1], GPS_info_list[corr[1]])
img_i = mpimg.imread(img_i_path)
img_j = mpimg.imread(img_j_path)
merge_img = np.hstack([img_i, img_j])
plt.imshow(merge_img)
plt.show()
if i > 10:
break | 1353 [201.13763, -174.712228]
305 [196.393236, -168.938331]
| MIT | .ipynb_checkpoints/20170622 Experiment-checkpoint.ipynb | JustWon/DCGAN-Experiment |