markdown (stringlengths 0–1.02M) | code (stringlengths 0–832k) | output (stringlengths 0–1.02M) | license (stringlengths 3–36) | path (stringlengths 6–265) | repo_name (stringlengths 6–127)
---|---|---|---|---|---
Similar Documents | sim = np.matmul(W, np.transpose(W))
print(sim.shape)
def similar_docs(filename, sim, topn):
doc_id = int(filename.split(".")[0])
row = sim[doc_id, :]
target_docs = np.argsort(-row)[0:topn].tolist()
scores = row[target_docs].tolist()
target_filenames = ["{:d}.txt".format(x) for x in target_docs]
return target_filenames, scores
filename2title = {}
with open(PAPERS_METADATA, "r") as f:
for line in f:
if line.startswith("#"):
continue
cols = line.strip().split("\t")
filename2title["{:s}.txt".format(cols[0])] = cols[2]
source_filename = "1032.txt"
top_n = 10
target_filenames, scores = similar_docs(source_filename, sim, top_n)
print("Source: {:s}".format(filename2title[source_filename]))
print("--- top {:d} similar docs ---".format(top_n))
for target_filename, score in zip(target_filenames, scores):
if target_filename == source_filename:
continue
print("({:.5f}) {:s}".format(score, filename2title[target_filename])) | Source: Forward-backward retraining of recurrent neural networks
--- top 10 similar docs ---
(0.05010) Context-Dependent Multiple Distribution Phonetic Modeling with MLPs
(0.04715) Is Learning The n-th Thing Any Easier Than Learning The First?
(0.04123) Learning Statistically Neutral Tasks without Expert Guidance
(0.04110) Combining Visual and Acoustic Speech Signals with a Neural Network Improves Intelligibility
(0.04087) The Ni1000: High Speed Parallel VLSI for Implementing Multilayer Perceptrons
(0.04038) Subset Selection and Summarization in Sequential Data
(0.04003) Back Propagation is Sensitive to Initial Conditions
(0.03939) Semi-Supervised Multitask Learning
(0.03862) SoundNet: Learning Sound Representations from Unlabeled Video
| Apache-2.0 | notebooks/19-content-recommender.ipynb | sujitpal/content-engineering-tutorial |
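The `W`, `X`, and `model` objects used in this and the following cell are not defined in this excerpt; they presumably come from a non-negative matrix factorization of the document feature matrix (the `violation: ... Converged at iteration 7` log further down is consistent with scikit-learn's NMF coordinate-descent solver). A minimal, hypothetical sketch of how they could be produced — the vectorizer choice, the vocabulary size of 11992, and `doc_texts` are assumptions, not taken from the original notebook:

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# doc_texts: list of raw document strings, one per paper (assumed to exist)
vectorizer = TfidfVectorizer(max_features=11992)
X = vectorizer.fit_transform(doc_texts)   # sparse (num_docs x vocab) feature matrix

# W: dense (num_docs x k) document-topic matrix; model.transform() later maps an
# averaged sparse feature vector into the same topic space
model = NMF(n_components=50, solver="cd", verbose=True, random_state=42)
W = model.fit_transform(X)
```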
Suggesting Documents based on Read Collection: We consider an arbitrary set of documents that a user has read, liked, or otherwise marked, and we want to recommend other documents they may like. To do this, we compute the average feature vector over these documents (starting from the sparse features), convert it to an average dense feature vector, and then find the documents most similar to it. | collection_size = np.random.randint(3, high=10, size=1)[0]
collection_ids = np.random.randint(0, high=num_docs, size=collection_size)  # randint's upper bound is exclusive, so this stays within valid document indices
feat_vec = np.zeros((1, 11992))
for collection_id in collection_ids:
feat_vec += X[collection_id, :]
feat_vec /= collection_size
y = model.transform(feat_vec)
doc_sims = np.matmul(W, np.transpose(y)).squeeze(axis=1)
target_ids = np.argsort(-doc_sims)[0:top_n]
scores = doc_sims[target_ids]
print("--- Source collection ---")
for collection_id in collection_ids:
print("{:s}".format(filename2title["{:d}.txt".format(collection_id)]))
print("--- Recommendations ---")
for target_id, score in zip(target_ids, scores):
print("({:.5f}) {:s}".format(score, filename2title["{:d}.txt".format(target_id)])) | violation: 1.0
violation: 0.23129634545431624
violation: 0.03209572604136983
violation: 0.007400997221153011
violation: 0.0012999049199094925
violation: 0.0001959522250959198
violation: 4.179248920879007e-05
Converged at iteration 7
--- Source collection ---
A Generic Approach for Identification of Event Related Brain Potentials via a Competitive Neural Network Structure
Implicit Surfaces with Globally Regularised and Compactly Supported Basis Functions
Learning Trajectory Preferences for Manipulators via Iterative Improvement
Statistical Modeling of Cell Assemblies Activities in Associative Cortex of Behaving Monkeys
Learning to Traverse Image Manifolds
--- Recommendations ---
(0.06628) Fast Second Order Stochastic Backpropagation for Variational Inference
(0.06128) Scalable Model Selection for Belief Networks
(0.05793) Large Margin Discriminant Dimensionality Reduction in Prediction Space
(0.05643) Efficient Globally Convergent Stochastic Optimization for Canonical Correlation Analysis
(0.05629) Recognizing Activities by Attribute Dynamics
(0.05622) Efficient Match Kernel between Sets of Features for Visual Recognition
(0.05565) Learning Wake-Sleep Recurrent Attention Models
(0.05466) Boosting Density Estimation
(0.05422) Sparse deep belief net model for visual area V2
(0.05350) Cluster Kernels for Semi-Supervised Learning
| Apache-2.0 | notebooks/19-content-recommender.ipynb | sujitpal/content-engineering-tutorial |
Welcome to VapourSynth in Colab! Basic usage instructions: run the setup script, and run all the tabs in the "processing" script for example output. For links to instructions, tutorials, and help, see https://github.com/AlphaAtlas/VapourSynthColab Init | #@title Check GPU
#@markdown Run this to connect to a Colab Instance, and see what GPU Google gave you.
gpu = !nvidia-smi --query-gpu=gpu_name --format=csv
print(gpu[1])
print("The Tesla T4 and P100 are fast and support hardware encoding. The K80 and P4 are slower.")
print("Sometimes resetting the instance in the 'runtime' tab will give you a different GPU.")
!apt-get install python3-distutils
!apt-get install python3-apt
!wget https://www.python.org/ftp/python/3.6.3/Python-3.6.3.tgz
!tar -xvf Python-3.6.3.tgz
%cd Python-3.6.3
!sudo ./configure --enable-optimizations
%cd /content/
!rm -rfv "./Python-3.6.3"
!rm -rfv "./Python-3.6.3.tgz"
!wget https://bootstrap.pypa.io/get-pip.py
!sudo python3.6 get-pip.py
!rm -rfv "./get-pip.py"
!pip install torch
!pip install cupy-cuda101
%cd /content/
!wget http://lliurex.net/bionic/pool/universe/f/ffmpeg/ffmpeg_3.4.2-2_amd64.deb
!sudo apt install ./ffmpeg_3.4.2-2_amd64.deb
!rm ./ffmpeg_3.4.2-2_amd64.deb
#@title Setup {display-mode: "form"}
#@markdown Run this to install VapourSynth, VapourSynth plugins and scripts, as well as some example upscaling models.
#NOTE: running this more than once may or may not work.
#The jumbled console output is due to the threaded installation
#Currently TPU support is broken and incomplete, but it isn't particularly useful since it doesn't support opencl anyway
#Init
import os, sys, shutil, tempfile
import collections
from datetime import datetime, timedelta
import requests
import threading
import ipywidgets as widgets
from IPython import display
import PIL
from google.colab import files
import time
%cd /
#Function defs
#---------------------------------------------------------
#Like shutil.copytree(), but doesn't complain about existing directories
#Note this is fixed in newer versions of Python 3
def copytree(src, dst, symlinks=False, ignore=None):
for item in os.listdir(src):
s = os.path.join(src, item)
d = os.path.join(dst, item)
if os.path.isdir(s):
shutil.copytree(s, d, symlinks, ignore)
else:
shutil.copy2(s, d)
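#Note: on Python 3.8+ this helper is unnecessary, since the standard library call can
#merge into an existing directory directly:
#    shutil.copytree(src, dst, symlinks=False, dirs_exist_ok=True)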
#Download and extract the .py scripts from the VapourSynth fatpack
def download_fatpack_scripts():
%cd /
print("Downloading VS FatPack Scripts...")
dlurl = r"https://github.com/theChaosCoder/vapoursynth-portable-FATPACK/releases/download/r3/VapourSynth64Portable_2019_11_02.7z"
with tempfile.TemporaryDirectory() as t:
dpath = os.path.join(t, "VapourSynth64Portable_2019_11_02.7z")
os.chdir(t)
!wget {dlurl}
%cd /
!7z x -o{t} {dpath}
scriptsd = os.path.abspath(os.path.join(t, "VapourSynth64Portable", "Scripts"))
s = os.path.normpath("VapourSynthImports")
os.makedirs(s, exist_ok = True)
copytree(scriptsd, s)
sys.path.append(s)
#Get some additional scripts.
!wget -O /VapourSynthImports/muvsfunc_numpy.py https://raw.githubusercontent.com/WolframRhodium/muvsfunc/master/Collections/muvsfunc_numpy.py
!wget -O /VapourSynthImports/edi_rpow2.py https://gist.githubusercontent.com/YamashitaRen/020c497524e794779d9c/raw/2a20385e50804f8b24f2a2479e2c0f3c335d4853/edi_rpow2.py
!wget -O /VapourSynthImports/BMToolkit.py https://raw.githubusercontent.com/IFeelBloated/BlockMatchingToolkit/master/BMToolkit.py
if accelerator == "CUDA":
!wget -O /VapourSynthImports/Alpha_CuPy.py https://raw.githubusercontent.com/AlphaAtlas/VapourSynth-Super-Resolution-Helper/master/Scripts/Alpha_CuPy.py
!wget -O /VapourSynthImports/dpid.cu https://raw.githubusercontent.com/WolframRhodium/muvsfunc/master/Collections/examples/Dpid_cupy/dpid.cu
!wget -O /VapourSynthImports/bilateral.cu https://raw.githubusercontent.com/WolframRhodium/muvsfunc/master/Collections/examples/BilateralGPU_cupy/bilateral.cu
#Get an example model:
import gdown
gdown.download(r"https://drive.google.com/uc?id=1KToK9mOz05wgxeMaWj9XFLOE4cnvo40D", "/content/4X_Box.pth", quiet=False)
def getdep1():
%cd /
#Install apt-fast, for faster installing
!/bin/bash -c "$(curl -sL https://git.io/vokNn)"
#Get some basic dependencies
!apt-fast install -y -q -q subversion davfs2 p7zip-full p7zip-rar ninja-build
#Get VapourSynth and ImageMagick built just for a colab environment
def getvs():
%cd /
#%cd var/cache/apt/archives
#Artifacts hosted on bintray. If they fail to install, they can be built from source.
!curl -L "https://github.com/03stevensmi/VapourSynthColab/raw/master/imagemagick_7.0.9-8-1_amd64.deb" -o /var/cache/apt/archives/imagemagick.deb
!dpkg -i /var/cache/apt/archives/imagemagick.deb
!ldconfig /usr/local/lib
!curl -L "https://github.com/03stevensmi/VapourSynthColab/raw/master/vapoursynth_48-1_amd64.deb" -o /var/cache/apt/archives/vapoursynth.deb
!dpkg -i /var/cache/apt/archives/vapoursynth.deb
!ldconfig /usr/local/lib
#%cd /
def getvsplugins():
%cd /
#Allow unauthenticated sources
if not os.path.isfile("/etc/apt/apt.conf.d/99myown"):
with open("/etc/apt/apt.conf.d/99myown", "w+") as f:
f.write(r'APT::Get::AllowUnauthenticated "true";')
sources = "/etc/apt/sources.list"
#Backup original apt sources file, just in case
with tempfile.TemporaryDirectory() as t:
tsources = os.path.join(t, os.path.basename(sources))
shutil.copy(sources, tsources)
#Add deb-multimedia repo
#Because building dozens of VS plugins is not fun, and takes forever
with open(sources, "a+") as f:
deb = "deb https://www.deb-multimedia.org sid main non-free\n"
f.seek(0) #"a+" opens positioned at the end of the file, so rewind before reading or the check always passes
if "deb-multimedia" not in f.read():
f.write(deb)
with open(sources, "a+") as f:
#Temporarily use Debian unstable for some required dependencies
f.seek(0)
if "ftp.us.debian.org" not in f.read():
f.write("deb http://ftp.us.debian.org/debian/ sid main\n")
!add-apt-repository -y ppa:deadsnakes/ppa
!apt-fast update -oAcquire::AllowInsecureRepositories=true
!apt-fast install -y --allow-unauthenticated deb-multimedia-keyring
!apt-fast update
#Parse plugins to install
out = !apt-cache search vapoursynth
vspackages = ""
#exclude packages with these strings in the name
exclude = ["waifu", "wobbly", "editor", "dctfilter", "vapoursynth-dev", "vapoursynth-doc"]
for line in out:
p = line.split(" - ")[0].strip()
if not any(x in p for x in exclude) and "vapoursynth" in p and p != "vapoursynth":
vspackages = vspackages + p + " "
print(vspackages)
#Install VS plugins and a newer ffmpeg build
!apt-fast install -y --allow-unauthenticated --no-install-recommends ffmpeg youtube-dl libzimg-dev {vspackages} libfftw3-3 libfftw3-double3 libfftw3-dev libfftw3-bin libfftw3-double3 libfftw3-single3 checkinstall
#Get a tiny example video
!youtube-dl -o /content/enhance.webm -f 278 https://www.youtube.com/watch?v=I_8ZH1Ggjk0
#Restore original sources
os.remove(sources)
shutil.copy(tsources, sources)
#Congrats! Apt may or may not be borked.
copytree("/usr/lib/x86_64-linux-gnu/vapoursynth", "/usr/local/lib/vapoursynth")
!ldconfig /usr/local/lib/vapoursynth
#Install vapoursynth python modules
def getpythonstuff():
%cd /
!python3.6 -m pip install vapoursynth meson opencv-python
def cudastuff():
%cd /
out = !nvcc --version
cudaver = (str(out).split("Cuda compilation tools, release ")[1].split(", ")[0].replace(".", ""))
#Note this download sometimes times out
!python3.6 -m pip install mxnet-cu{cudaver} #cupy-cuda{cudaver}
!pip install git+https://github.com/AlphaAtlas/VSGAN.git
#Mxnet stuff
modelurl = "https://github.com/WolframRhodium/Super-Resolution-Zoo/trunk"
if os.path.isdir("/NeuralNetworks"):
!svn update --set-depth immediates /NeuralNetworks
!svn update --set-depth infinity /NeuralNetworks/ARAN
else:
!svn checkout --depth immediates {modelurl} /NeuralNetworks
def makesrcd(name):
%cd /
srpath = os.path.abspath(os.path.join("/src", name))
os.makedirs(srpath, exist_ok = False)
%cd {srpath}
def mesongit(giturl):
p = os.path.basename(giturl)[:-4]
makesrcd(p)
!git clone {giturl}
%cd {p}
!meson build
!ninja -C build
!ninja -C build install
#Taken from https://stackoverflow.com/a/31614591
#Allows exceptions to be caught from threads
from threading import Thread
class PropagatingThread(Thread):
def run(self):
self.exc = None
try:
if hasattr(self, '_Thread__target'):
# Thread uses name mangling prior to Python 3.
self.ret = self._Thread__target(*self._Thread__args, **self._Thread__kwargs)
else:
self.ret = self._target(*self._args, **self._kwargs)
except BaseException as e:
self.exc = e
def join(self):
super(PropagatingThread, self).join()
if self.exc:
raise self.exc
return self.ret
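#Quick illustration (not executed here) of how PropagatingThread surfaces exceptions
#that a plain threading.Thread would swallow; the failing target is a made-up example:
#    def might_fail():
#        raise RuntimeError("install step failed")
#    t = PropagatingThread(target=might_fail)
#    t.start()
#    try:
#        t.join()                     #re-raises the RuntimeError captured in run()
#    except RuntimeError as e:
#        print("caught from thread:", e)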
#Interpolation experiment
#%cd /
#os.makedirs("/videotools")
#%cd /videotools
#!git clone https://github.com/sniklaus/pytorch-sepconv.git
#%cd /
#Function for testing vapoursynth scripts
#Takes the path of the script, and a boolean for generating a test frame.
#-----------------------------------------------------------
#Init functions are threaded for speed
#"PropagatingThread" class is used to return exceptions from threads, otherwise they fail silently
t1 = PropagatingThread(target = getdep1)
t1.start()
print("apt init thread started")
t2 = PropagatingThread(target = download_fatpack_scripts)
t2.start()
print("VS script downloader thread started.")
#Get rid of memory usage log spam from MXnet
os.environ["TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD"] = "107374182400"
#Check for an accelerator
accelerator = None
gpu = None
if 'COLAB_TPU_ADDR' in os.environ:
#WIP
raise Exception("TPUs are (currently) not supported! Please use a GPU or CPU instance.")
else:
#Check for Nvidia GPU, and identify it
out = !command -v nvidia-smi
if out != []:
out = !nvidia-smi
for l in out:
if "Driver Version" in l:
accelerator = "CUDA"
print("Nvidia GPU detected:")
gpu = !nvidia-smi --query-gpu=gpu_name --format=csv
gpu = gpu[1]
#print("Tesla K80 < Tesla T4 < Tesla P100")
break
if accelerator is None:
print("Warning: No Accelerator Detected!")
t1.join()
print("Apt init thread done.")
t1 = PropagatingThread(target = getvs)
t1.start()
print("Vapoursynth/Imagemagick downloader thread started.")
t1.join()
print("Vapoursynth/Imagemagick installed")
t3 = PropagatingThread(target = getpythonstuff)
t3.start()
print("Pip thread started")
t1 = PropagatingThread(target = getvsplugins)
t1.start()
print("VS plugin downloader thread started.")
t3.join()
print("pip thread done")
if accelerator == "TPU":
#WIP!
pass
elif accelerator == "CUDA":
t3 = PropagatingThread(target = cudastuff)
t3.start()
print("CUDA pip thread started.")
else:
pass
t2.join()
print("VS script downloader thread done.")
t3.join()
print("CUDA pip thread done.")
t1.join()
print("VS plugin thread done.")
#Build some more plugin(s)
#TODO: Build without changing working directory, or try the multiprocessing module, so building can run asynchronously
print("Building additional plugins")
mesongit(r"https://github.com/HomeOfVapourSynthEvolution/VapourSynth-DCTFilter.git")
mesongit(r"https://github.com/HomeOfVapourSynthEvolution/VapourSynth-TTempSmooth.git")
googpath = None
%cd /
Clear_Console_Output_When_Done = True #@param {type:"boolean"}
if Clear_Console_Output_When_Done:
display.clear_output()
#if gpu is not None:
# print(gpu[1])
# print("A Tesla T4 or P100 is significantly faster than a K80")
# print("And the K80 doesn't support hardware encoding.")
#@title Mount Google Drive
#@markdown Highly recommended!
import os
%cd /
#Check if Google Drive is mounted, and mount it if it's not.
googpath = os.path.abspath(os.path.join("gdrive", "MyDrive"))
if not os.path.isdir(googpath):
from google.colab import drive
drive.mount('/gdrive', force_remount=True)
#@title Mount a Nextcloud Drive
import os
nextcloud = "/nextcloud"
os.makedirs(nextcloud, exist_ok=True)
Nextcloud_URL = "https://us.hostiso.cloud/remote.php/webdav/" #@param {type:"string"}
%cd /
if os.path.isfile("/etc/fstab"):
os.remove("/etc/fstab")
with open("/etc/fstab" , "a") as f:
f.write(Nextcloud_URL + " " + nextcloud + " davfs user,rw,auto 0 0")
!mount {nextcloud} | _____no_output_____ | MIT | VapourSynthColab.ipynb | 03stevensmi/VapourSynthColab |
Processing | %%writefile /content/autogenerated.vpy
#This is the Vapoursynth Script!
#Running this cell will write the code in this cell to disk, for VSPipe to read.
#Later cells will check to see if it executes.
#Edit it just like a regular python VS script.
#Search for functions and function reference in http://vsdb.top/, or browse the "VapourSynthImports" folder.
#Import functions
import sys, os, cv2
sys.path.append('/VapourSynthImports')
import vapoursynth as vs
import vsgan as VSGAN
import mvsfunc as mvf
#import muvsfunc as muf
#import fvsfunc as fvf
import havsfunc as haf
import Alpha_CuPy as ape
import muvsfunc_numpy as mufnp
#import BMToolkit as bm
import G41Fun as G41
#import vsutil as util
#import edi_rpow2 as edi
#import kagefunc as kage
#import lostfunc as lost
#import vsTAAmbk as taa
#import xvs as xvs
from vapoursynth import core
#Set RAM cache size, in MB
core.max_cache_size = 10500
#Get Video(s) or Image(s). ffms2 (ffmpeg) or imwri (imagemagick) will read just about anything.
#Lsmash sometimes works if ffms2 fails, d2v reads mpeg2 files
clip = core.ffms2.Source(r"/content/enhance.webm")
#clip = core.lsmas.LWLibavSource("/tmp/%d.png")
#clip = core.imwri.Read("testimage.tiff")
#Store source for previewing
src = clip
#Convert to 16 bit YUV for preprocessing
#clip = core.resize.Spline36(clip, format = vs.YUV444P16)
#Deinterlace
#clip = G41.QTGMC(clip, Preset='Medium')
#Mild deblocking
#clip = fvf.AutoDeblock(clip)
#Convert to floating point RGB
clip = mvf.ToRGB(clip, depth = 32)
#Spatio-temporal GPU denoiser. https://github.com/Khanattila/KNLMeansCL/wiki/Filter-description
clip = core.knlm.KNLMeansCL(clip, a = 8, d = 4, h = 1.4)
preupscale = clip
#Run ESRGAN model. See https://upscale.wiki/wiki/Model_Database
vsgan_device = VSGAN.VSGAN()
vsgan_device.load_model(model=r"/content/4X_Box.pth", scale=4)
clip = vsgan_device.run(clip=clip, chunk = False, pad = 16)
clip = core.knlm.KNLMeansCL(clip, a = 7, d = 3, h = 1.4)
#Run MXNet model. See the "MXNet" cell.
#Tensorflow models are also supported!
#sr_args = dict(model_filename=r'/NeuralNetworks/ARAN/aran_c0_s1_x4', up_scale=4, device_id=0, block_w=256, block_h=128, is_rgb_model=True, pad=None, crop=None, pre_upscale=False)
#clip = mufnp.super_resolution(clip, **sr_args)
#HQ downscale on the GPU with dpid
#clip = ape.GPU_Downscale(clip, width = 3840, height = 2160)
#Convert back to YUV 444 format/Rec 709 colorspace
clip = core.resize.Spline36(clip, format = vs.YUV444P16, matrix_s = "709")
#Strong temporal denoiser and stabilizer with the LR as a motion reference clip, for stabilizing.
prefilter = core.resize.Spline36(preupscale, format = clip.format, width = clip.width, height = clip.height, matrix_s = "709")
clip = G41.SMDegrain(clip, tr=3, RefineMotion=True, pel = 1, prefilter = prefilter)
#Another CPU denoiser/stabilizer. "very high" is very slow.
#clip = haf.MCTemporalDenoise(clip, settings = "very high", useTTmpSm = True, maxr=4, stabilize = True)
#Stabilized Anti Aliasing, with some GPU acceleration
#clip = taa.TAAmbk(clip, opencl=True, stabilize = 3)
#Example sharpeners that work well on high-res images
#Masks or mvf.limitfilter are good ways to keep artifacts in check
#clip = core.warp.AWarpSharp2(clip)
#clip = G41.NonlinUSM(clip, z=3, sstr=0.28, rad=9, power=1)
#High quality, strong debanding
#clip = fvf.GradFun3(clip, smode = 2)
#Convert back to 8 bit YUV420 for output.
clip = core.resize.Spline36(clip, format = vs.YUV420P8, matrix_s = "709", dither_type = "error_diffusion")
#Interpolate to double the source framerate
#super = core.mv.Super(inter)
#backward_vectors = core.mv.Analyse(super, isb = True, overlap=4, search = 3)
#forward_vectors = core.mv.Analyse(super, isb = False, overlap=4, search = 3)
#inter = core.mv.FlowFPS(inter, super, backward_vectors, forward_vectors, num=0, den=0)
#Stack the source on top of the processed clip for comparison
src = core.resize.Point(src, width = clip.width, height = clip.height, format = clip.format)
#clip = core.std.StackVertical([clip, src])
#Alternatively, interleave the source and slow down the framerate for easy comparison.
clip = core.std.Interleave([clip, src])
clip = core.std.AssumeFPS(clip, fpsnum = 2)
#clip = core.std.SelectEvery(clip=clip, cycle=48, offsets=[0,1])
clip.set_output()
#@title Preview Options
#@markdown Run this cell to check the .vpy script, and set preview options.
#@markdown * Software encoding is relatively slow on colab's single CPU core, but returns a smaller video.
#@markdown * Hardware encoding doesn't work on older GPUs or a TPU, but can be faster.
#@markdown * Sometimes video previews don't work. Chrome seems more reliable than Firefox, but its video player doesn't support scrubbing. Alternatively, you can download the preview in the "/content" folder with the Colab UI.
#@markdown * HEVC support in browsers is iffy.
#@markdown * PNG previews are more reliable, but only show a single frame.
#@markdown * In video previews, you can interleave the source and processed clips and change the framerate for easy comparisons.
#@markdown ***
#TODO: Make vpy file path editable
vpyscript = "/content/autogenerated.vpy"
#@markdown Use hardware encoding.
Hardware_Encoding = True #@param {type:"boolean"}
#@markdown Encode preview as lossless or high quality lossy video
Lossless = False #@param {type:"boolean"}
#@markdown Use HEVC instead of AVC for preview. Experimental.
HEVC = False #@param {type:"boolean"}
#@markdown Generate a single PNG instead of a video.
Write_PNG = False #@param {type:"boolean"}
#@markdown Don't display any video preview, just write it to /content
Display_Video = False #@param {type:"boolean"}
#@markdown Number of preview frames to generate
preview_frames = 120 #@param {type:"integer"}
#Check script with test frame (for debugging)
Test_Frame = False
from IPython.display import clear_output
import ipywidgets as widgets
from pprint import pprint
def checkscript(vpyfile, checkoutput):
#Clear the preview cache folder, as the script could have changed
quotepath = r'"' + vpyfile + r'"'
print("Testing script...")
if checkoutput:
#See if the script will really output a frame
test = !vspipe -y -s 0 -e 0 {quotepath} .
#Parse the script, and return information about it.
rawinfo = !vspipe -i {quotepath} -
#Store clip properties as a dict
#I really need to learn regex...
clipinfo = eval(r"{" + str(rawinfo)[1:].replace(r"\n", r"','").replace(r": ", r"': '")[:-1] + r"}")
!clear
if not isinstance(clipinfo, dict):
print(rawinfo)
raise Exception("Error parsing VapourSynth script!")
#print("Script output properties: ")
#!echo {clipinfo}
return clipinfo, rawinfo, quotepath
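#A hedged, eval-free alternative to the parsing in checkscript(), assuming `vspipe -i`
#prints one "Key: Value" pair per line; left unused so the behaviour above is unchanged.
def parse_vspipe_info(rawinfo):
    info = {}
    for line in rawinfo:
        if ":" in line:
            key, _, value = str(line).partition(":")
            info[key.strip()] = value.strip()
    return info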
#Make a preview button, and a frame slider
#Note that the slider won't appear with single frame scripts
%cd /
#display.clear_output()
!clear
clipinfo, rawinfo, quotepath = checkscript(vpyscript, Test_Frame)
frameslider = None
drawslider = int(clipinfo["Frames"]) > 1
if drawslider:
frameslider = widgets.IntSlider(value=0, max=(int(clipinfo["Frames"]) - 1), layout=widgets.Layout(width='100%', height='150%'))
else:
preview_frames = 1
fv = None
if not(preview_frames > 0 and preview_frames <= int(clipinfo["Frames"])):
raise Exception("preview_frames must be a valid integer")
if drawslider:
fv = int(frameslider.value)
else:
fv = 0
encstr = ""
previewfile = r"/usr/local/share/jupyter/nbextensions/preview.mp4"
if os.path.isfile(previewfile):
os.remove(previewfile)
ev = min((int(fv + preview_frames - 1), int(clipinfo["Frames"])- 1))
enctup = (Hardware_Encoding, HEVC, Lossless)
if enctup == (True, True, True):
encstr = r"-c:v hevc_nvenc -profile main10 -preset lossless -spatial_aq:v 1 -aq-strength 15 "
elif enctup == (True, True, False):
encstr = r"-c:v hevc_nvenc -pix_fmt yuv420p10le -preset:v medium -profile:v main10 -spatial_aq:v 1 -aq-strength 15 -rc:v constqp -qp:v 9"
elif enctup == (True, False, True):
encstr = r"-c:v h264_nvenc -preset lossless -profile high444p -spatial-aq 1 -aq-strength 15"
elif enctup == (False, True, True):
encstr = r"-c:v libx265 -pix_fmt yuv420p10le -preset slow -x265-params lossless=1"
elif enctup == (True, False, False):
encstr = r"-c:v h264_nvenc -pix_fmt yuv420p -preset:v medium -rc:v constqp -qp:v 10 -spatial-aq 1 -aq-strength 15"
elif enctup == (False, False, True):
encstr = r"-c:v libx264 -preset veryslow -crf 0"
elif enctup == (False, True, False):
encstr = r"-c:v libx265 -pix_fmt yuv420p10le -preset slow -crf 9"
elif enctup == (False, False, False):
encstr = r"-c:v libx264 -pix_fmt yuv420p -preset veryslow -crf 9"
else:
raise Exception("Invalid parameters!")
clear_output()
print(*rawinfo, sep = ' ')
print("Select the frame(s) you want to preview with the slider and 'preview frames', then run the next cell.")
display.display(frameslider)
#@title Generate Preview
import os, time
previewdisplay = r"""
<video controls autoplay>
<source src="/nbextensions/preview.mp4" type='video/mp4;"'>
Your browser does not support the video tag.
</video>
"""
previewpng = "/content/preview" + str(frameslider.value) + ".png"
if os.path.isfile(previewfile):
os.remove(previewfile)
if os.path.isfile(previewpng):
os.remove(previewpng)
frames = str(clipinfo["Frames"])
end = min(frameslider.value + preview_frames - 1, int(clipinfo["Frames"]) - 1)
if Write_PNG:
!vspipe -y -s {frameslider.value} -e {frameslider.value} /content/autogenerated.vpy - | ffmpeg -y -hide_banner -loglevel warning -i pipe: {previewpng}
if os.path.isfile(previewpng):
import PIL
display.display(PIL.Image.open(previewpng, mode='r'))
else:
raise Exception("Error generating preview!")
else:
out = !vspipe --progress -y -s {frameslider.value} -e {end} /content/autogenerated.vpy - | ffmpeg -y -hide_banner -progress pipe:1 -loglevel warning -i pipe: {encstr} {previewfile} | grep "fps"
if os.path.isfile(previewfile):
if os.path.isfile("/content/preview.mp4"):
os.remove("/content/preview.mp4")
!ln {previewfile} "/content/preview.mp4"
clear_output()
for temp in out:
if "Output" in temp:
print(temp)
if Display_Video:
display.display(display.HTML(previewdisplay))
else:
raise Exception("Error generating preview!") | _____no_output_____ | MIT | VapourSynthColab.ipynb | 03stevensmi/VapourSynthColab |
Scratch Space--- | #Do stuff here
#Example ffmpeg script:
!vspipe -y /content/autogenerated.vpy - | ffmpeg -i pipe: -c:v hevc_nvenc -profile:v main10 -preset lossless -spatial_aq:v 1 -aq-strength 15 "/gdrive/MyDrive/upscale.mkv"
#TODO: Figure out why vspipe's progress isn't showing up in colab. | _____no_output_____ | MIT | VapourSynthColab.ipynb | 03stevensmi/VapourSynthColab |
Extra Functions | #@title Build ImageMagick and VapourSynth for Colab
#@markdown VapourSynth needs to be built for Python 3.6, and Imagemagick needs to be built for the VapourSynth imwri plugin. The setup script pulls from bintray, but this cell will rebuild and reinstall them if those debs don't work.
#@markdown The built debs can be found in the "src" folder.
#Get some requirements for building
def getbuildstuff():
!apt-fast install software-properties-common autoconf automake libtool build-essential cython3 coreutils pkg-config
!python3.6 -m pip install tesseract cython
#Build imagemagick, for imwri and local image manipulation, and create a deb
def buildmagick():
makesrcd("imagemagick")
!wget https://imagemagick.org/download/ImageMagick-7.0.9-8.tar.gz
!tar xvzf ImageMagick-7.0.9-8.tar.gz
%cd ImageMagick-7.0.9-8
!./configure --enable-hdri=yes --with-quantum-depth=32
!make -j 4 --quiet
!sudo checkinstall -D --fstrans=no --install=yes --default --pakdir=/src --pkgname=imagemagick --pkgversion="8:7.0.9-8"
!ldconfig /usr/local/lib
#Build vapoursynth for colab (python 3.6, Broadwell SIMD, etc.), and create a deb
def buildvs():
makesrcd("vapoursynth")
!wget https://github.com/vapoursynth/vapoursynth/archive/R48.tar.gz
!tar -xf R48.tar.gz
%cd vapoursynth-R48
!./autogen.sh
!./configure --enable-imwri
!make -j 4 --quiet
!sudo checkinstall -D --fstrans=no --install=yes --default --pakdir=/src --pkgname=vapoursynth --pkgversion=48
!ldconfig /usr/local/lib
getbuildstuff()
buildmagick()
buildvs()
#@title MXnet
#@markdown This cell will pull pretrained models from https://github.com/WolframRhodium/Super-Resolution-Zoo
#@markdown For usage examples, see [this](https://github.com/WolframRhodium/muvsfunc/blob/master/Collections/examples/super_resolution_mxnet.vpy)
#@markdown and [this](https://github.com/WolframRhodium/Super-Resolution-Zoo/wiki/Explanation-of-configurations-in-info.md)
#Note that there's no release for the mxnet C++ plugin, and I can't get it to build in colab, but the header pulls and installs mxnet and the numpy super resolution function
n = "ESRGAN" #@param {type:"string"}
!svn update --set-depth infinity NeuralNetworks/{n} | _____no_output_____ | MIT | VapourSynthColab.ipynb | 03stevensmi/VapourSynthColab |
Fourth-Order Runge-Kutta MethodClass: F1014B Computational Modeling of Electromagnetic SystemsAuthor: Edoardo BucheliLecturer, Tec de Monterrey Campus Santa Fe IntroductionIn this session we will learn a numerical method for solving initial value problems of the form,$$\frac{dy}{dx} = f(x,y)\qquad y(x_0) = y_0$$The method we will study is known as the Runge-Kutta method, developed by the German mathematicians Carl Runge and Wilhelm Kutta. It is in turn a more accurate version of **Euler's Method**, which we will review briefly before moving on to Runge-Kutta. Euler's MethodEuler's method is based on a very simple principle. Given a slope field, we build a trajectory that follows the slope at a point $(x,y)$ over a horizontal step $\Delta x = h$. This procedure is visualized in the following image, which makes very clear the error incurred by the approximation. To improve the approximation, we can reduce the size of $h$, as shown in the next figure, with approximations for $h = 1$, $h = 0.2$ and $h = 0.05$. Although a value of $h$ may be small enough to obtain a result, if the final value of $x$ we are looking for is very large then our algorithm can become computationally expensive. That is why a more efficient method is needed. Exercise: Implement Euler's MethodNow we will start programming; in this section you need to implement Euler's Method. To do that, let us define the method formally: Definition: Euler's MethodFor the initial value problem,$$\frac{dy}{dx} = f(x,y)\qquad y(x_0)=y_0$$Euler's method with step size $h$ consists of applying the iterative formula,$$y_{n+1} = y_n + h\cdot f(x_n,y_n)$$and$$x_{n+1} = x_n + h$$to compute the approximations $y_1,y_2,y_3,\dots$ of the true values $y(x_1),y(x_2),y(x_3),\dots$ that arise from the exact solution $y(x)$ Guided SolutionLet us start by importing the required libraries. | import numpy as np
import matplotlib.pyplot as plt | _____no_output_____ | MIT | Metodo de Runge-Kutta_Actividad.ipynb | ebucheli/F1014B |
To solve the problem, let us try an implementation similar to what a more experienced programmer would write. We will therefore split the solution into two functions: one will compute a single step of Euler's method, and the second will use it to repeat the process as many times as needed. Let us start by implementing the function `euler_step()`, which carries out one step of Euler's method. | def euler_step(x_n,y_n,f,h):
"""
Computes one step of Euler's method
Inputs:
x_n: int, float
The value of x going into this step
y_n: int, float
The value of y going into this step
f: function
A function representing f(x,y) in the initial value problem.
h: int, float
The step size from one step to the next
Output:
x_n_plus_1: float
The value of x updated according to Euler's method
y_n_plus_1: float
The value of y updated according to Euler's method
"""
# Start your code here (about 2 lines)
return x_n_plus_1, y_n_plus_1
| _____no_output_____ | MIT | Metodo de Runge-Kutta_Actividad.ipynb | ebucheli/F1014B |
Let us test your function with the following problem,$$\frac{dy}{dx} = x + \frac{1}{5}y \qquad y(0) = -3 $$using $h = 1$ | x_0 = 0
y_0 = -3
def f(x,y):
return x + (1/5)*y
h = 1
print(euler_step(x_0,y_0,f,h)) | _____no_output_____ | MIT | Metodo de Runge-Kutta_Actividad.ipynb | ebucheli/F1014B |
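For reference, here is one possible way to fill in the blank in `euler_step()`, following the iterative formula from the definition above ($y_{n+1} = y_n + h\cdot f(x_n,y_n)$, $x_{n+1} = x_n + h$); it is kept as a separate function so the blank in the activity stays untouched, and it reproduces the expected result quoted below:

```python
def euler_step_solution(x_n, y_n, f, h):
    # One Euler step: advance y along the slope at (x_n, y_n), then advance x by h
    y_n_plus_1 = y_n + h * f(x_n, y_n)
    x_n_plus_1 = x_n + h
    return x_n_plus_1, y_n_plus_1
```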
Your result should be `(1, -3.6)`. Now that we have a function that computes one step of the method, let us implement the function `euler_method()`, which uses `euler_step()` to generate a list of values up to a new variable `x_goal`. | def euler_method(x_0,y_0,x_goal,f,h):
"""
Returns a list of y approximations computed with Euler's method up to a given value x_goal
Inputs:
x_0: int, float
The initial value of x
y_0: int, float
The initial value of y
x_goal: int, float
The value of x up to which we want to compute the Euler approximations
f: function
A function representing f(x,y) in the initial value problem.
h: int, float
The step size from one step to the next
Output:
x_n_list: list
A list with the values of 'x' from x_0 up to the x_{n+1} closest to x_goal that is still smaller
y_n_list: list
A list with the 'y' approximations evaluated at the values of x from x_0 up to the x_{n+1} closest to x_goal that is still smaller
"""
# Define x_n and y_n as the initial values
x_n = x_0
y_n = y_0
# Create the lists x_n_list and y_n_list; for now they only contain the initial values
x_n_list = [x_0]
y_n_list = [y_0]
# Write a loop here that carries out the procedure as many times as needed;
# don't forget to append every value you compute to 'x_n_list' and 'y_n_list'
# Approx. 4 lines
return x_n_list,y_n_list
x_0 = 0
y_0 = -3
def f(x,y):
return x + (1/5)*y
h = 1
x_goal = 5
print(euler_method(x_0,y_0,x_goal,f,h)) | _____no_output_____ | MIT | Metodo de Runge-Kutta_Actividad.ipynb | ebucheli/F1014B |
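Similarly, one possible loop for `euler_method()` that reproduces the expected output quoted below; for non-integer step sizes the `while` comparison may need a small tolerance because of floating-point accumulation:

```python
def euler_method_solution(x_0, y_0, x_goal, f, h):
    x_n, y_n = x_0, y_0
    x_n_list, y_n_list = [x_0], [y_0]
    while x_n < x_goal:
        # Euler update: y first (using the current x_n), then x
        y_n = y_n + h * f(x_n, y_n)
        x_n = x_n + h
        x_n_list.append(x_n)
        y_n_list.append(y_n)
    return x_n_list, y_n_list
```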
The output of the previous cell should be:`([0, 1, 2, 3, 4, 5], [-3, -3.6, -3.3200000000000003, -1.9840000000000004, 0.6191999999999993, 4.743039999999999])` An Improved Euler MethodA simple way to improve Euler's method is to obtain more than one slope and take the average of the slopes to improve the prediction, as shown in the following figure. Once we generate a new point $y(x_{n+1})$ we can obtain the slope at that point and average it with the slope found at $y(x_n)$ to get a slightly better prediction. Runge-Kutta MethodUsing the **Fundamental Theorem of Calculus** we can derive the following expression,$$y(x_{n+1})-y(x_n) = \int_{x_n}^{x_{n+1}}y'(x)dx$$In turn, by **Simpson's rule** for integration we can approximate this as,$$y(x_{n+1})-y(x_n)\approx \frac{h}{6}\bigg[y'\big(x_n\big)+4y'\bigg(x_n+\frac{h}{2}\bigg)+y'\big(x_{n+1}\big)\bigg]$$Therefore we can solve for $y_{n+1}$ as follows,$$y(x_{n+1})\approx y(x_n) + \frac{h}{6}\bigg[y'\big(x_n\big)+2y'\bigg(x_n+\frac{h}{2}\bigg)+2y'\bigg(x_n+\frac{h}{2}\bigg)+y'\big(x_{n+1}\big)\bigg]$$We will define the terms inside the brackets $y'\big(x_n\big)$, $y'\big(x_n+\frac{h}{2}\big)$, $y'\big(x_n+\frac{h}{2}\big)$ and $y'\big(x_{n+1}\big)$ as $k_1,k_2,k_3,k_4$ respectively, as follows, $k_1 = f(x_n,y_n)$This is the slope at $(x_n,y_n)$, the same slope we would use in the original Euler method. $k_2 = f(x_n+\frac{1}{2}h,y_n+\frac{1}{2}hk_1)$This is the slope at the midpoint of $x_n$ and $x_{n+1}$ according to the slope $k_1$ $k_3 = f(x_n+\frac{1}{2}h,y_n+\frac{1}{2}hk_2)$This is a correction of the slope at the midpoint of $x_n$ and $x_{n+1}$, as done in the corrected Euler method. $k_4 = f(x_{n+1},y_n+hk_3)$This is the slope at the point $(x_{n+1},y_{n+1})$ based on the corrected slope $k_3$.This finally leads us to the form,$$y_{n+1}=y_n+\frac{h}{6}(k_1+2k_2+2k_3+k_4)$$Note that the four computed slopes are used in a weighted way. That is, it is not exactly an average; we give a bit more weight to certain slopes, specifically those at the midpoints. Let us then define the algorithm formally. Definition: Runge-Kutta MethodFor the initial value problem,$$\frac{dy}{dx} = f(x,y)\qquad y(x_0)=y_0$$the Runge-Kutta method with step size $h$ consists of applying the iterative formula,$$y_{n+1}=y_n+\frac{h}{6}(k_1+2k_2+2k_3+k_4)$$where,* $k_1 = f(x_n,y_n)$* $k_2 = f(x_n+\frac{1}{2}h,y_n+\frac{1}{2}hk_1)$* $k_3 = f(x_n+\frac{1}{2}h,y_n+\frac{1}{2}hk_2)$* $k_4 = f(x_{n+1},y_n+hk_3)$to compute the approximations $y_1,y_2,y_3,\dots$ of the true values $y(x_1),y(x_2),y(x_3),\dots$ that arise from the exact solution $y(x)$ Exercise: Implement the Runge-Kutta MethodWe will now implement the Runge-Kutta method. As with Euler, it is convenient to write a function that performs one step, combined with a function that runs the required number of iterations. Let us start by defining the function `runge_kutta_step()`, which will carry out one iteration of the method. | def runge_kutta_step(x_n,y_n,f,h):
"""
Computes one iteration of the fourth-order Runge-Kutta method
Inputs:
x_n: int, float
The value of x going into this step
y_n: int, float
The value of y going into this step
f: function
A function representing f(x,y) in the initial value problem.
h: int, float
The step size from one step to the next
Output:
x_n_plus_1: float
The value of x updated according to the Runge-Kutta method
y_n_plus_1: float
The value of y updated according to the Runge-Kutta method
"""
# Compute each of the slopes k according to the Runge-Kutta method
k_1 = #
k_2 = #
k_3 = #
k_4 = #
# Now compute y_n_plus_1 and x_n_plus_1
# Approx. 2 lines
return x_n_plus_1, y_n_plus_1
def f(x,y):
return x + y
x_0 = 0
y_0 = 1
h = 0.5
x_1_test, y_1_test = runge_kutta_step(x_0,y_0,f,h)
print(f"x_1 = {x_1_test}\ny_1 = {y_1_test}") | _____no_output_____ | MIT | Metodo de Runge-Kutta_Actividad.ipynb | ebucheli/F1014B |
The result above should be:```x_1 = 0.5y_1 = 1.796875```To finish, let us implement the function `runge_kutta()` | def runge_kutta(x_0,y_0,x_goal,f,h):
"""
Returns a list of y approximations computed with the Runge-Kutta method up to a given value x_goal
Inputs:
x_0: int, float
The initial value of x
y_0: int, float
The initial value of y
x_goal: int, float
The value of x up to which we want to compute the Runge-Kutta approximations
f: function
A function representing f(x,y) in the initial value problem.
h: int, float
The step size from one step to the next
Output:
x_n_list: list
A list with the values of 'x' from x_0 up to the x_{n+1} closest to x_goal that is still smaller
y_n_list: list
A list with the 'y' approximations evaluated at the values of x from x_0 up to the x_{n+1} closest to x_goal that is still smaller
"""
# Define x_n and y_n as the initial values
x_n = x_0
y_n = y_0
# Create the lists x_n_list and y_n_list; for now they only contain the initial values
x_n_list = [x_0]
y_n_list = [y_0]
# Write a loop here that carries out the procedure as many times as needed;
# don't forget to append every value you compute to 'x_n_list' and 'y_n_list'
# Approx. 4 lines
return x_n_list,y_n_list
def f(x,y):
return x + y
x_0 = 0
y_0 = 1
h = 0.1
x_goal = 1
x_list, y_list = runge_kutta(x_0,y_0,x_goal,f,h) | _____no_output_____ | MIT | Metodo de Runge-Kutta_Actividad.ipynb | ebucheli/F1014B |
Let us use a library called `prettytable` to print our result. If you don't have it installed, the following line will raise an error. | from prettytable import PrettyTable
If you don't have the library, you can install it by running the following command in a cell,`!pip install PrettyTable`, or simply run `pip install PrettyTable` in a terminal window (mac and linux) or the Anaconda prompt (windows). | mytable = PrettyTable()
for x,y in zip(x_list,y_list):
mytable.add_row(["{:0.2f}".format(x),"{:0.6f}".format(y)])
print(mytable) | _____no_output_____ | MIT | Metodo de Runge-Kutta_Actividad.ipynb | ebucheli/F1014B |
This notebook is for prototyping data preparation for insertion into the database. Data for installer table.Need:- installer name- installer primary module manufacurer (e.g. mode of manufacturer name for all installers) | import pandas as pd
import numpy as np
def load_lbnl_data(replace_nans=True):
df1 = pd.read_csv('../data/TTS_LBNL_public_file_10-Dec-2019_p1.csv', encoding='latin-1', low_memory=False)
df2 = pd.read_csv('../data/TTS_LBNL_public_file_10-Dec-2019_p2.csv', encoding='latin-1', low_memory=False)
lbnl_df = pd.concat([df1, df2], axis=0)
if replace_nans:
lbnl_df.replace(-9999, np.nan, inplace=True)
lbnl_df.replace('-9999', np.nan, inplace=True)
return lbnl_df
lbnl_df = load_lbnl_data(replace_nans=False)
lbnl_df_nonan = load_lbnl_data()
lbnl_df.head()
lbnl_df.info()
# get mode of module manufacturer #1 for each install company
# doesn't seem to work when -9999 values are replaced with NaNs
manufacturer_modes = lbnl_df[['Installer Name', 'Module Manufacturer #1']].groupby('Installer Name').agg(lambda x: x.value_counts().index[0])
manufacturer_modes.head()
lbnl_zip_data = lbnl_df[['Battery System', 'Feed-in Tariff (Annual Payment)', 'Zip Code']].copy() | _____no_output_____ | Apache-2.0 | notebooks/Prototype_data_prep.ipynb | nateGeorge/udacity_dend_capstone |
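Regarding the comment above that the mode aggregation fails once -9999 is replaced with NaN: `value_counts()` drops NaN, so an installer whose manufacturer values are all missing yields an empty series and `index[0]` raises an IndexError. A hedged variant that tolerates those groups (the `safe_mode` name is my own):

```python
import numpy as np

def safe_mode(s):
    # Mode of a Series ignoring NaN; NaN if the whole group is missing
    counts = s.dropna().value_counts()
    return counts.index[0] if len(counts) else np.nan

manufacturer_modes_nan = (
    lbnl_df_nonan[['Installer Name', 'Module Manufacturer #1']]
    .groupby('Installer Name')
    .agg(safe_mode)
)
```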
Replace missing values (-9999) with 0 so they don't throw off the average calculation. | lbnl_zip_data.replace(-9999, 0, inplace=True)
lbnl_zip_groups = lbnl_zip_data.groupby('Zip Code').mean()
lbnl_zip_groups.head()
lbnl_zip_groups.info() | <class 'pandas.core.frame.DataFrame'>
Index: 36744 entries, 85351 to 99403
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Battery System 36744 non-null float64
1 Feed-in Tariff (Annual Payment) 36744 non-null float64
dtypes: float64(2)
memory usage: 861.2+ KB
| Apache-2.0 | notebooks/Prototype_data_prep.ipynb | nateGeorge/udacity_dend_capstone |
Drop missing zip codes. | lbnl_zip_groups = lbnl_zip_groups[~(lbnl_zip_groups.index == '-9999')]
lbnl_zip_groups.reset_index(inplace=True)
lbnl_zip_groups.head() | _____no_output_____ | Apache-2.0 | notebooks/Prototype_data_prep.ipynb | nateGeorge/udacity_dend_capstone |
Data for the Utility table.Need:- zipcode- utility name- ownership- service typeJoin EIA-861 report data with EIA IOU rates by zipcode | eia861_df = pd.read_excel('../data/Sales_Ult_Cust_2018.xlsx', header=[0, 1, 2])
def load_eia_iou_data():
iou_df = pd.read_csv('../data/iouzipcodes2017.csv')
noniou_df = pd.read_csv('../data/noniouzipcodes2017.csv')
eia_zipcode_df = pd.concat([iou_df, noniou_df], axis=0)
# zip codes are ints without zero padding
eia_zipcode_df['zip'] = eia_zipcode_df['zip'].astype('str')
eia_zipcode_df['zip'] = eia_zipcode_df['zip'].apply(lambda x: x.zfill(5))
return eia_zipcode_df
eia_zip_df = load_eia_iou_data()
eia_zip_df.info()
# util number here is eiaid in the IOU data
utility_number = eia861_df['Utility Characteristics', 'Unnamed: 1_level_1', 'Utility Number']
utility_name = eia861_df['Utility Characteristics', 'Unnamed: 2_level_1', 'Utility Name']
service_type = eia861_df['Utility Characteristics', 'Unnamed: 4_level_1', 'Service Type']
ownership = eia861_df['Utility Characteristics', 'Unnamed: 7_level_1', 'Ownership']
eia_utility_data = pd.concat([utility_number, utility_name, service_type, ownership], axis=1)
eia_utility_data.columns = eia_utility_data.columns.droplevel(0).droplevel(0)
eia_utility_data.head()
res_data = eia861_df['RESIDENTIAL'].copy()
res_data.head()
res_data[res_data['Revenues', 'Thousand Dollars'] == '.'] | _____no_output_____ | Apache-2.0 | notebooks/Prototype_data_prep.ipynb | nateGeorge/udacity_dend_capstone |
Missing values seem to be encoded as a period ('.'). | res_data.replace('.', np.nan, inplace=True)
for c in res_data.columns:
print(c)
res_data[c] = res_data[c].astype('float')
res_data['average_yearly_bill'] = res_data['Revenues', 'Thousand Dollars'] * 1000 / res_data['Customers', 'Count']
res_data.head()
res_data['average_yearly_kwh'] = (res_data['Sales', 'Megawatthours'] * 1000) / res_data['Customers', 'Count']
res_data.head() | _____no_output_____ | Apache-2.0 | notebooks/Prototype_data_prep.ipynb | nateGeorge/udacity_dend_capstone |
Get average bill and kWh used by zip code. | res_columns = ['average_yearly_bill', 'average_yearly_kwh']
res_data.columns = res_data.columns.droplevel(1)
res_data[res_columns].head()
eia_861_data = pd.concat([res_data[res_columns], eia_utility_data], axis=1)
eia_861_data.head()
eia_861_data_zipcode = eia_861_data.merge(eia_zip_df, left_on='Utility Number', right_on='eiaid')
eia_861_data_zipcode.head() | _____no_output_____ | Apache-2.0 | notebooks/Prototype_data_prep.ipynb | nateGeorge/udacity_dend_capstone |
Double-check res_rate | eia_861_data_zipcode['res_rate_recalc'] = eia_861_data_zipcode['average_yearly_bill'] / eia_861_data_zipcode['average_yearly_kwh']
eia_861_data_zipcode.head()
eia_861_data_zipcode.drop_duplicates(inplace=True)
eia_861_data_zipcode.tail()
eia_861_data_zipcode.info() | <class 'pandas.core.frame.DataFrame'>
Int64Index: 152322 entries, 0 to 159485
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 average_yearly_bill 143449 non-null float64
1 average_yearly_kwh 143449 non-null float64
2 Utility Number 152322 non-null float64
3 Utility Name 152322 non-null object
4 Service Type 152322 non-null object
5 Ownership 152322 non-null object
6 zip 152322 non-null object
7 eiaid 152322 non-null int64
8 utility_name 152322 non-null object
9 state 152322 non-null object
10 service_type 152322 non-null object
11 ownership 152322 non-null object
12 comm_rate 152322 non-null float64
13 ind_rate 152322 non-null float64
14 res_rate 152322 non-null float64
15 res_rate_recalc 143449 non-null float64
dtypes: float64(7), int64(1), object(8)
memory usage: 19.8+ MB
| Apache-2.0 | notebooks/Prototype_data_prep.ipynb | nateGeorge/udacity_dend_capstone |
Join Project Sunroof solar, ACS, EIA, and LBNL data to get the main table. Try to save all of the required data from BigQuery. | # Set up GCP API
from google.cloud import bigquery
# Construct a BigQuery client object.
client = bigquery.Client()
# ACS US census data
ACS_DB = '`bigquery-public-data`.census_bureau_acs'
ACS_TABLE = 'zip_codes_2017_5yr'
# project sunroof
PSR_DB = '`bigquery-public-data`.sunroof_solar'
PSR_TABLE = 'solar_potential_by_postal_code'
# columns to keep from ACS data
ACS_COLS = ['geo_id', # zipcode
'median_age',
'housing_units',
'median_income',
'owner_occupied_housing_units',
'occupied_housing_units',
# housing units which will be used to calculate total single-family homes
'dwellings_1_units_detached',
'dwellings_1_units_attached',
'dwellings_2_units',
'dwellings_3_to_4_units',
'bachelors_degree_2',
'different_house_year_ago_different_city',
'different_house_year_ago_same_city']
query = """SELECT {} FROM {}.{} LIMIT 20;""".format(', '.join(ACS_COLS), ACS_DB, ACS_TABLE)
acs_df = pd.read_gbq(query)
acs_df
query = f"""SELECT geo_id,
median_age,
housing_units,
median_income,
owner_occupied_housing_units,
occupied_housing_units,
dwellings_1_units_detached + dwellings_1_units_attached + dwellings_2_units + dwellings_3_to_4_units AS family_homes,
bachelors_degree_2,
different_house_year_ago_different_city + different_house_year_ago_same_city AS moved_recently
FROM {ACS_DB}.{ACS_TABLE}
LIMIT 10;"""
test_df = pd.read_gbq(query)
test_df
acs_data_query = f"""SELECT geo_id,
median_age,
housing_units,
median_income,
owner_occupied_housing_units,
occupied_housing_units,
dwellings_1_units_detached + dwellings_1_units_attached + dwellings_2_units + dwellings_3_to_4_units AS family_homes,
bachelors_degree_2,
different_house_year_ago_different_city + different_house_year_ago_same_city AS moved_recently
FROM {ACS_DB}.{ACS_TABLE}"""
acs_data = pd.read_gbq(acs_data_query)
acs_data.to_csv('../data/acs_data.csv', index=False)
acs_data.shape
acs_data.head() | _____no_output_____ | Apache-2.0 | notebooks/Prototype_data_prep.ipynb | nateGeorge/udacity_dend_capstone |
Project sunroof data | psr_cols = ['region_name',
'percent_covered',
'percent_qualified',
'number_of_panels_total',
'kw_median',
'count_qualified',
'existing_installs_count']
psr_query = f"""SELECT region_name,
percent_covered,
percent_qualified,
number_of_panels_total,
kw_median,
(count_qualified - existing_installs_count) AS potential_installs
FROM {PSR_DB}.{PSR_TABLE}
LIMIT 10;
"""
test_df = pd.read_gbq(psr_query)
test_df
psr_query = f"""SELECT region_name,
percent_covered,
percent_qualified,
number_of_panels_total,
kw_median,
(count_qualified - existing_installs_count) AS potential_installs
FROM {PSR_DB}.{PSR_TABLE};
"""
psr_df = pd.read_gbq(psr_query)
psr_df.to_csv('../data/psr_data.csv')
psr_df.head() | _____no_output_____ | Apache-2.0 | notebooks/Prototype_data_prep.ipynb | nateGeorge/udacity_dend_capstone |
Join data for main data table | psr_acs = psr_df.merge(acs_data, left_on='region_name', right_on='geo_id', how='outer')
psr_acs.head()
psr_acs_lbnl = psr_acs.merge(lbnl_zip_groups, left_on='region_name', right_on='Zip Code', how='outer')
psr_acs_lbnl_eia = psr_acs_lbnl.merge(eia_861_data_zipcode, left_on='region_name', right_on='zip', how='outer')
psr_acs_lbnl_eia.head()
psr_acs_lbnl_eia.columns
psr_acs_lbnl_eia.info() | <class 'pandas.core.frame.DataFrame'>
Int64Index: 206079 entries, 0 to 206078
Data columns (total 34 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 region_name 42105 non-null object
1 percent_covered 42105 non-null float64
2 percent_qualified 42105 non-null float64
3 number_of_panels_total 42020 non-null float64
4 kw_median 42020 non-null float64
5 potential_installs 42105 non-null float64
6 geo_id 62530 non-null object
7 median_age 61729 non-null float64
8 housing_units 62530 non-null float64
9 median_income 59662 non-null float64
10 owner_occupied_housing_units 62530 non-null float64
11 occupied_housing_units 62530 non-null float64
12 family_homes 62530 non-null float64
13 bachelors_degree_2 62399 non-null float64
14 moved_recently 62399 non-null float64
15 Zip Code 54245 non-null object
16 Battery System 54245 non-null float64
17 Feed-in Tariff (Annual Payment) 54245 non-null float64
18 average_yearly_bill 143459 non-null float64
19 average_yearly_kwh 143459 non-null float64
20 Utility Number 152332 non-null float64
21 Utility Name 152332 non-null object
22 Service Type 152332 non-null object
23 Ownership 152332 non-null object
24 zip 152332 non-null object
25 eiaid 152332 non-null float64
26 utility_name 152332 non-null object
27 state 152332 non-null object
28 service_type 152332 non-null object
29 ownership 152332 non-null object
30 comm_rate 152332 non-null float64
31 ind_rate 152332 non-null float64
32 res_rate 152332 non-null float64
33 res_rate_recalc 143459 non-null float64
dtypes: float64(23), object(11)
memory usage: 55.0+ MB
| Apache-2.0 | notebooks/Prototype_data_prep.ipynb | nateGeorge/udacity_dend_capstone |
Looks like we have a lot of missing data. Combine the zip code columns to have one zip column with no missing data. | def fill_zips(x):
if not pd.isna(x['zip']):
return x['zip']
elif not pd.isna(x['Zip Code']):
return x['Zip Code']
elif not pd.isna(x['geo_id']):
return x['geo_id']
elif not pd.isna(x['region_name']):
return x['region_name']
else:
return np.nan
psr_acs_lbnl_eia['full_zip'] = psr_acs_lbnl_eia.apply(fill_zips, axis=1)
# columns we'll use in the same order as the DB table
cols_to_use = ['full_zip',
'percent_qualified',
'number_of_panels_total',
'kw_median',
'potential_installs',
'median_income',
'median_age',
'occupied_housing_units',
'owner_occupied_housing_units',
'family_homes',
'bachelors_degree_2',
'moved_recently',
'average_yearly_bill',
'average_yearly_kwh',
# note: installer ID has to be gotten from the installer table
'Battery System',
'Feed-in Tariff (Annual Payment)']
df_to_write = psr_acs_lbnl_eia[cols_to_use]
df_to_write.head()
df_to_write.describe()
df_to_write.info() | <class 'pandas.core.frame.DataFrame'>
Int64Index: 206079 entries, 0 to 206078
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 full_zip 206079 non-null object
1 percent_qualified 42105 non-null float64
2 number_of_panels_total 42020 non-null float64
3 kw_median 42020 non-null float64
4 potential_installs 42105 non-null float64
5 median_income 59662 non-null float64
6 median_age 61729 non-null float64
7 occupied_housing_units 62530 non-null float64
8 owner_occupied_housing_units 62530 non-null float64
9 family_homes 62530 non-null float64
10 bachelors_degree_2 62399 non-null float64
11 moved_recently 62399 non-null float64
12 average_yearly_bill 143459 non-null float64
13 average_yearly_kwh 143459 non-null float64
14 Battery System 54245 non-null float64
15 Feed-in Tariff (Annual Payment) 54245 non-null float64
dtypes: float64(15), object(1)
memory usage: 26.7+ MB
| Apache-2.0 | notebooks/Prototype_data_prep.ipynb | nateGeorge/udacity_dend_capstone |
That's a lot of missing data. Something is wrong though, since there should only be ~41k zip codes, and this is showing 206k. | df_to_write.to_csv('../data/solar_metrics_data.csv', index=False)
import pandas as pd
df = pd.read_csv('../data/solar_metrics_data.csv')
df.drop_duplicates().shape
df['full_zip'].drop_duplicates().shape
df['full_zip'].head() | _____no_output_____ | Apache-2.0 | notebooks/Prototype_data_prep.ipynb | nateGeorge/udacity_dend_capstone |
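A likely cause of the ~206k rows is that the EIA zip-code table has several utility rows per zip code, so each outer merge multiplies rows. A hedged diagnostic and one possible way to collapse back to one row per zip (the column choices below are illustrative):

```python
# How many zip codes appear more than once after the merges?
print(psr_acs_lbnl_eia['full_zip'].duplicated().sum())

# One option: average the utility metrics so each zip keeps a single row
collapsed = (
    psr_acs_lbnl_eia
    .groupby('full_zip', as_index=False)
    .agg({'average_yearly_bill': 'mean', 'average_yearly_kwh': 'mean', 'res_rate': 'mean'})
)
print(collapsed.shape)
```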
Loading the data | !ls ../input_data/
train = pd.read_csv('../input_data/train.csv')
test = pd.read_csv('../input_data/test.csv')
train.head(10) | _____no_output_____ | MIT | titanic/nb/0 - Getting to know the data.ipynb | mapa17/Kaggle |
FeaturesFrom the competition documentation|Feature | Explanation | Values ||--------|-------------|--------||survival| Survival |0 = No, 1 = Yes||pclass| Ticket class |1 = 1st, 2 = 2nd, 3 = 3rd||sex| Sex| male/female||Age| Age| in years ||sibsp| No. of siblings / spouses aboard the Titanic| numeric| |parch| No. of parents / children aboard the Titanic| numeric||ticket| Ticket number | string||fare| Passenger fare | numeric ||cabin| Cabin number | string||embarked| Port of Embarkation| C = Cherbourg, Q = Queenstown, S = Southampton|**Notes**pclass: A proxy for socio-economic status (SES)1st = Upper2nd = Middle3rd = Lowerage: Age is fractional if less than 1. If the age is estimated, it is in the form of xx.5sibsp: The dataset defines family relations in this way...Sibling = brother, sister, stepbrother, stepsisterSpouse = husband, wife (mistresses and fiancés were ignored)parch: The dataset defines family relations in this way...Parent = mother, fatherChild = daughter, son, stepdaughter, stepsonSome children travelled only with a nanny, therefore parch=0 for them. | print('DataSet Size')
train.shape
print("Number of missing values")
pd.DataFrame(train.isna().sum(axis=0)).T
train['Pclass'].hist(grid=False)
train.describe()
print('Numaber of missing Cabin strings per class')
train[['Pclass', 'Cabin']].groupby('Pclass').agg(lambda x: x.isna().sum())
print('Numaber of missing Age per class')
train[['Pclass', 'Age']].groupby('Pclass').agg(lambda x: x.isna().sum())
train['Embarked'].value_counts()
print('Whats the influence of the port?')
train[['Embarked', 'Survived']].groupby('Embarked').agg(lambda x: x.sum())
print('Relative surival rate per port')
train[['Embarked', 'Survived']].groupby('Embarked').agg(lambda x: x.sum())['Survived'] / train['Embarked'].value_counts()
print('Number of Pclass per port')
train[['Pclass', 'Embarked']].groupby('Embarked').apply(lambda x: x['Pclass'].value_counts(sort=False))
# Mark if a cabin is known or not
train['UnknownCabin'] = train['Cabin'].isna()
train['UnknownAge'] = train['Age'].isna()
train['Sp-Pa'] = train['SibSp'] - train['Parch']
train.corr() | _____no_output_____ | MIT | titanic/nb/0 - Getting to know the data.ipynb | mapa17/Kaggle |
Correlation Interpretation* Pclass: a higher pclass (i.e., a worse class) significantly decreases the chance of survival (the rich first)* Age: higher age slightly decreases survival (the children first)* SibSp: more siblings have a slight negative effect on survival (bigger families have it more difficult?)* Parch: having more parent figures increases the chance of survival* Fare: a higher fare significantly increases the chance of survival | fig, ax = plt.subplots()
classes = []
for pclass, df in train[['Pclass', 'Fare']].groupby('Pclass', as_index=False):
df['Fare'].plot(kind='kde', ax=ax)
classes.append(pclass)
ax.legend(classes)
ax.set_xlim(-10, 200)
fig, ax = plt.subplots()
classes = []
for pclass, df in train[['Pclass', 'Age']].groupby('Pclass', as_index=False):
df['Age'].plot(kind='kde', ax=ax)
classes.append(pclass)
ax.legend(classes)
g = sns.FacetGrid(train, col="Sex", row="Pclass")
g = g.map(plt.hist, "Survived", density=True, bins=[0, 1, 2], rwidth=0.8)
g = sns.FacetGrid(train, col="Survived", row="Pclass", hue='Sex')
g = g.map(lambda S, **kwargs: S.plot('hist', **kwargs, alpha=0.5), "Age")
g.add_legend()
g = sns.FacetGrid(train, col="Survived", row="Pclass")
g = g.map(lambda S, **kwargs: S.plot('hist', **kwargs), "Fare") | _____no_output_____ | MIT | titanic/nb/0 - Getting to know the data.ipynb | mapa17/Kaggle |
Skewness: It is the degree of distortion from the normal distribution; it measures the lack of symmetry in the data distribution. -- Positive skewness means the tail on the right side of the distribution is longer or fatter. The mean and median will be greater than the mode. -- Negative skewness means the tail on the left side of the distribution is longer or fatter than the tail on the right side. The mean and median will be less than the mode. | height_weight_data['Height'].skew()
height_weight_data['Weight'].skew()
listOfSeries = [pd.Series(['Male', 400, 300], index=height_weight_data.columns ),
pd.Series(['Female', 660, 370], index=height_weight_data.columns ),
pd.Series(['Female', 199, 410], index=height_weight_data.columns ),
pd.Series(['Male', 202, 390], index=height_weight_data.columns ),
pd.Series(['Female', 770, 210], index=height_weight_data.columns ),
pd.Series(['Male', 880, 203], index=height_weight_data.columns )]
height_weight_updated = height_weight_data.append(listOfSeries , ignore_index=True)
height_weight_updated.tail()
height_weight_updated[['Height']].plot(kind = 'hist', bins=100,
title = 'Height', figsize=(12, 8))
height_weight_updated[['Weight']].plot(kind = 'hist', bins=100,
title = 'weight', figsize=(12, 8))
height_weight_updated[['Height']].plot(kind = 'kde',
title = 'Height', figsize=(12, 8))
height_weight_updated[['Weight']].plot(kind = 'kde',
title = 'weight', figsize=(12, 8))
height_weight_updated['Height'].skew()
height_weight_updated['Weight'].skew() | _____no_output_____ | MIT | book/_build/html/_sources/descriptive/m3-demo-05-SkewnessAndKurtosisUsingPandas.ipynb | hossainlab/statswithpy |
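For intuition, skewness can also be computed directly as the standardized third central moment. Below is a minimal sketch of that calculation, assuming the `height_weight_updated` frame built in the cells above; note that pandas' `.skew()` applies a small-sample bias correction, so its value differs slightly from this moment-based version.

```python
import numpy as np

h = np.asarray(height_weight_updated['Height'], dtype=float)
m = h.mean()
s = h.std()                                # population standard deviation (ddof=0)
skew_moment = np.mean((h - m)**3) / s**3   # standardized third central moment
print(skew_moment)
```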
Kurtosis: It measures the heaviness of the distribution's tails, i.e. the presence of outliers. -- High kurtosis in a data set is an indicator that the data has heavy tails or outliers. -- Low kurtosis in a data set is an indicator that the data has light tails or a lack of outliers. | height_weight_data['Height'].kurtosis()
height_weight_data['Weight'].kurtosis()
height_weight_updated['Height'].kurtosis()
height_weight_updated['Weight'].kurtosis() | _____no_output_____ | MIT | book/_build/html/_sources/descriptive/m3-demo-05-SkewnessAndKurtosisUsingPandas.ipynb | hossainlab/statswithpy |
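Similarly, kurtosis is the standardized fourth central moment. A minimal sketch follows, again assuming `height_weight_updated` from above; pandas' `.kurtosis()` reports bias-corrected *excess* kurtosis (i.e. with 3 subtracted), so the values differ slightly.

```python
import numpy as np

h = np.asarray(height_weight_updated['Height'], dtype=float)
m, s = h.mean(), h.std()                 # mean and population std (ddof=0)
kurt = np.mean((h - m)**4) / s**4        # plain kurtosis (a normal distribution gives 3)
excess_kurt = kurt - 3.0                 # excess kurtosis, comparable to .kurtosis()
print(kurt, excess_kurt)
```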
Imports | # ! pip install pandas
# ! pip install calender
# ! pip install numpy
# ! pip install datetime
# ! pip install matplotlib
# ! pip install collections
# ! pip install random
# ! pip install tqdm
# ! pip install sklearn
# ! pip install lightgbm
# ! pip install xgboost
import pandas as pd
import calendar
from datetime import datetime
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from IPython.display import clear_output as cclear
from lightgbm import LGBMRegressor
import joblib | _____no_output_____ | MIT | Training and Output.ipynb | iamshamikb/Walmart_M5_Accuracy |
Load data | def get_csv(X):
return pd.read_csv(path+X)
path = ''
calender, sales_train_ev, sales_train_val, sell_prices = get_csv('calendar.csv'), get_csv('sales_train_evaluation.csv'), \
get_csv('sales_train_validation.csv'), get_csv('sell_prices.csv')
non_numeric_col_list = ['id','item_id','dept_id','cat_id','store_id','state_id','d', 'date']
store_dict = {'CA_1':0, 'CA_2':0, 'CA_3':0, 'CA_4':0, 'WI_1':0, 'WI_2':0, 'WI_3':0, 'TX_1':0, 'TX_2':0, 'TX_3':0}
# Encoding Categorical Columns
def encode_cat_cols(new_df):
le = [0]*len(non_numeric_col_list)
for i in range(len(non_numeric_col_list)):
print("Encoding col: ", non_numeric_col_list[i])
le[i] = LabelEncoder()
new_df[non_numeric_col_list[i]] = le[i].fit_transform( new_df[non_numeric_col_list[i]] )
return le, new_df
# Function for reversing the long form
def reverse_long_form(le, X_test, train_out):
for i in range(len(non_numeric_col_list)):
X_test[non_numeric_col_list[i]] = le[i].inverse_transform(X_test[non_numeric_col_list[i]])
X_test['unit_sale'] = train_out
kk = X_test.pivot(index='id', columns='d')['unit_sale']
kk['id'] = kk.index
kk.reset_index(drop=True, inplace=True)
cols = list(kk)
cols = [cols[-1]] + cols[:-1]
kk = kk[cols]
return kk
# This function does feature engineering on sales_train_ev or sales_train_val
# There is another feature engineering function for adding columns to a dataframe containing rows of only one store
def feature_engineer(df):
day_columns = list(df.columns[6:])
other_var = list(df.columns[:6])
print('Melting out...')
df = pd.melt(df, id_vars = other_var, value_vars = day_columns)
df = df.rename(columns = {"variable": "d", "value": "unit_sale"})
# print(df.shape)
print('Adding Feature \'date\'...')
cal_dict = dict(zip(calender.d,calender.date))
df["date"] = df["d"].map(cal_dict)
# df.head()
print('Adding Feature \'day_of_week\'...')
day_of_week_dict = dict(zip(calender.d,calender.wday))
df['day_of_week'] = df["d"].map(day_of_week_dict)
print('Adding Feature \'month_no\'...')
month_no_dict = dict(zip(calender.d,calender.month))
df['month_no'] = df["d"].map(month_no_dict)
print('Adding Feature \'day_of_month\'...')
l = [i[-2:] for i in list(calender.date)]
calender['day_of_month'] = l
day_of_month_dict = dict(zip(calender.d,calender.day_of_month))
df['day_of_month'] = df["d"].map(day_of_month_dict)
print('Done.')
print('Here is how featurised data looks like...')
print(df.head(3))
return df
def reorder_data(df, csv_name):
df['sp_index'] = (df.index)
index_dict = dict(zip(df.id, df.sp_index))
df = df.drop('sp_index', axis=1)
kk = pd.read_csv(str(csv_name)+'.csv')
# kk = kk.drop(kk.columns[0], axis=1)
kk['sp_index'] = kk["id"].map(index_dict)
kk = kk.sort_values(by='sp_index', axis=0)
kk = kk.drop('sp_index', axis=1)
kk.to_csv(str(csv_name)+'.csv') | _____no_output_____ | MIT | Training and Output.ipynb | iamshamikb/Walmart_M5_Accuracy |
Train Function | def startegy7dot1(new_df, dept):
print('Using strategy ', strategy)
evaluation, validation = new_df.id.iloc[0].find('evaluation'), new_df.id.iloc[0].find('validation')
new_df = new_df[new_df.dept_id == dept]
print('Total rows: ', len(new_df))
rows_per_day = len(new_df[new_df.d == 'd_1'])
print('Rows per day: ', rows_per_day)
new_df['day_of_month'] = new_df['day_of_month'].fillna(0)
new_df = new_df.astype({'day_of_month': 'int32'}) # Making day_of_month column as int
new_df['date'] = new_df['date'].astype(str)
y = new_df.unit_sale # getting the label
new_df = new_df.drop('unit_sale', axis=1)
print('Encoding categorical features...')
le, new_df = encode_cat_cols(new_df) # Encoding Categorical Columns
X = new_df
ev_train_start, ev_train_end, val_train_start, val_train_end = rows_per_day*(0), rows_per_day*1941,\
rows_per_day*(0), rows_per_day*1913
model = LGBMRegressor(boosting_type = 'gbdt',
objective = 'tweedie',
tweedie_variance_power = 1.3,
metric = 'rmse',
subsample = 0.5,
subsample_freq = 1,
learning_rate = 0.03,
num_leaves = 3000,
min_data_in_leaf = 5000,
feature_fraction = 0.5,
max_bin = 300,
n_estimators = 500,
boost_from_average = False,
verbose = -1,
n_jobs = -1)
if evaluation != -1: # if evaluation data
print('Getting X_train, y_train...')
X_train, y_train = X.iloc[ev_train_start:ev_train_end], y[ev_train_start:ev_train_end]
X_test, y_test = X.iloc[ev_train_end:], y[ev_train_end:]
model_name = 'Eval_'+str(dept)+'.pkl'
joblib.dump(le, 'le_Eval_'+str(dept)+'.pkl')
if validation != -1: # if validation data
print('Getting X_train, y_train...')
X_train, y_train = X.iloc[val_train_start:val_train_end], y[val_train_start:val_train_end]
X_test, y_test = X.iloc[val_train_end:], y[val_train_end:]
model_name = 'Val_'+str(dept)+'.pkl'
joblib.dump(le, 'le_Val_'+str(dept)+'.pkl')
print('X_train len', len(X_train), 'y_train len', len(y_train), 'X_test len', len(X_test))
print('Fitting model...')
model.fit(X_train, y_train)
print('Fitting done. Saving model...')
joblib.dump(model, model_name)
joblib_model = joblib.load(model_name)
print('Making predictions...')
train_out = joblib_model.predict(X_test)
print('Done.')
return le, X_test, train_out
def get_output_of_eval_or_val(df):
main_out_df = pd.DataFrame()
list_dept = list(set(df.dept_id))
for i in list_dept:
print('Sequence of depts processing: ', list_dept)
print('Working on Dept: ', i)
le, X_test, train_out = startegy7dot1(df, i)
print('Reversing the long form...')
out_df = reverse_long_form(le, X_test, train_out)
main_out_df = pd.concat([main_out_df, out_df], ignore_index=False)
cclear()
l = [] # In this part we rename the columns to F_1, F_2 ....
for i in range(1,29):
l.append('F'+str(i))
l = ['id']+l
main_out_df.columns = l
return main_out_df | _____no_output_____ | MIT | Training and Output.ipynb | iamshamikb/Walmart_M5_Accuracy |
Run | strategy = 7.1
############## Eval data
%%time
df = sales_train_ev.copy()
empty_list = [0]*30490
for i in range(1942, 1970):
df['d_'+str(i)] = empty_list
df = feature_engineer(df)
%%time
main_out_df_ev = get_output_of_eval_or_val(df)
main_out_df_ev.to_csv('main_out_ev.csv', index=False)
############# Val Data
%%time
df = sales_train_val.copy()
empty_list = [0]*30490
for i in range(1914, 1942):
df['d_'+str(i)] = empty_list
df = feature_engineer(df)
%%time
main_out_df_val = get_output_of_eval_or_val(df)
main_out_df_val.to_csv('main_out_val.csv', index=False)
############# Reorder and Write the output
sales_train_val
reorder_data(sales_train_val, 'main_out_val')
reorder_data(sales_train_ev, 'main_out_ev')
main_out_ev = pd.read_csv('main_out_ev.csv')
main_out_val = pd.read_csv('main_out_val.csv')
sub_df = pd.concat([main_out_ev, main_out_val], ignore_index=True)
sub_df = sub_df.round(2)
sub_df.drop([sub_df.columns[0], sub_df.columns[-1]], axis=1, inplace=True)
sub_df
sub_df.to_csv('submission.csv', index=False) | _____no_output_____ | MIT | Training and Output.ipynb | iamshamikb/Walmart_M5_Accuracy |
Get text | with open('data/one_txt/blogger.txt') as f:
blogger = f.read()
with open('data/one_txt/wordpress.txt') as f:
wordpress = f.read()
txt = wordpress + blogger | _____no_output_____ | BSD-3-Clause | analyze_vocab.ipynb | quentin-auge/blogger |
Explore vocabulary | vocab_count = dict(Counter(txt))
vocab_freq = {char: count / len(txt) for char, count in vocab_count.items()}
sorted(zip(vocab_count.keys(), vocab_count.values(), vocab_freq.values()), key=operator.itemgetter(1))
full_vocab = sorted(vocab_count.keys(), key=vocab_count.get, reverse=True)
full_vocab = ''.join(full_vocab)
full_vocab | _____no_output_____ | BSD-3-Clause | analyze_vocab.ipynb | quentin-auge/blogger |
Normalize some of the text characters | def normalize_txt(txt):
# Non-breaking spaces -> regular spaces
txt = txt.replace('\xa0', ' ')
# Double quotes
double_quotes_chars = '“”»«'
for double_quotes_char in double_quotes_chars:
txt = txt.replace(double_quotes_char, '"')
# Single quotes
single_quote_chars = '‘`´’'
for single_quote_char in single_quote_chars:
txt = txt.replace(single_quote_char, "'")
# Triple dots
txt = txt.replace('…', '...')
# Hyphens
hyphen_chars = '–—'
for hyphen_char in hyphen_chars:
txt = txt.replace(hyphen_char, '-')
return txt
txt = normalize_txt(txt)
vocab_count = dict(Counter(txt))
full_vocab = sorted(vocab_count.keys(), key=vocab_count.get, reverse=True)
full_vocab = ''.join(full_vocab)
full_vocab | _____no_output_____ | BSD-3-Clause | analyze_vocab.ipynb | quentin-auge/blogger |
Restrict text to a sensible vocabulary | vocab = ' !"$%\'()+,-./0123456789:;=>?ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz~°àâçèéêëîïôùûœо€'
# Restrict text to vocabulary
def restrict_to_vocab(txt, vocab):
txt = ''.join(char for char in txt if char in vocab)
return txt
txt = restrict_to_vocab(txt, vocab)
# Double check new vocabulary
assert ''.join(sorted(set(txt))) == vocab | _____no_output_____ | BSD-3-Clause | analyze_vocab.ipynb | quentin-auge/blogger |
One-dimensional Lagrange Interpolation The problem of interpolation, or finding the value of a function at an arbitrary point $X$ inside a given domain provided we have discrete known values of the function inside the same domain, is at the heart of the finite element method. In this notebook we use Lagrange interpolation, where the approximation $\hat f(x)$ to the function $f(x)$ is built as:\begin{equation}\hat f(x)={L^I}(x)f^I\end{equation}In the expression above $L^I$ represents the $I$-th Lagrange polynomial of order $n-1$ and $f^1, f^2, \ldots, f^n$ are the $n$ known values of the function. Here we are using the summation convention over the repeated superscripts. The $I$-th Lagrange polynomial is given by the product:\begin{equation}{L^I}(x)=\prod_{J=1, J \ne I}^{n}{\frac{{\left( {x - {x^J}} \right)}}{{\left( {{x^I} - {x^J}} \right)}}} \end{equation}in the domain $x\in[-1.0,1.0]$. We wish to interpolate the function $ f(x)=x^3+4x^2-10 $ assuming we know its value at the points $x=-1.0$, $x=1.0$ and $x=0.0$. | from __future__ import division
import numpy as np
from scipy import interpolate
import sympy as sym
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib notebook
| _____no_output_____ | MIT | notebooks/lagrange_interpolation.ipynb | jomorlier/FEM-Notes |
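Before the symbolic construction below, here is a purely numerical sketch of the same idea (my own illustration, not part of the original notebook): the interpolant $\hat f(X)={L^I}(X)f^I$ evaluated with plain numpy for the three sample points used here. The helper name `lagrange_interp` is introduced only for this sketch.

```python
import numpy as np

def lagrange_interp(x_data, f_data, x_eval):
    """Evaluate the Lagrange interpolant built on (x_data, f_data) at the points x_eval."""
    result = np.zeros_like(x_eval, dtype=float)
    for i, xi in enumerate(x_data):
        # L^I(x) = product over J != I of (x - x^J) / (x^I - x^J)
        basis = np.ones_like(x_eval, dtype=float)
        for j, xj in enumerate(x_data):
            if j != i:
                basis *= (x_eval - xj) / (xi - xj)
        result += f_data[i] * basis
    return result

x_data = np.array([-1.0, 1.0, 0.0])
f_data = x_data**3 + 4.0*x_data**2 - 10.0   # known values of f at the sample points
print(lagrange_interp(x_data, f_data, np.linspace(-1.0, 1.0, 5)))
```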
First we use a function to generate the Lagrange polynomial of order $order$ at point $i$. | def basis_lagrange(x_data, var, cont):
"""Find the basis for the Lagrange interpolant"""
prod = sym.prod((var - x_data[i])/(x_data[cont] - x_data[i])
for i in range(len(x_data)) if i != cont)
return sym.simplify(prod) | _____no_output_____ | MIT | notebooks/lagrange_interpolation.ipynb | jomorlier/FEM-Notes |
we now define the function $ f(x)=x^3+4x^2-10 $: | fun = lambda x: x**3 + 4*x**2 - 10
x = sym.symbols('x')
x_data = np.array([-1, 1, 0])
f_data = fun(x_data) | _____no_output_____ | MIT | notebooks/lagrange_interpolation.ipynb | jomorlier/FEM-Notes |
And obtain the Lagrange polynomials using: | basis = []
for cont in range(len(x_data)):
basis.append(basis_lagrange(x_data, x, cont))
sym.pprint(basis[cont])
| x⋅(x - 1)
─────────
2
x⋅(x + 1)
─────────
2
2
- x + 1
| MIT | notebooks/lagrange_interpolation.ipynb | jomorlier/FEM-Notes |
which are shown in the following plots: | npts = 101
x_eval = np.linspace(-1, 1, npts)
basis_num = sym.lambdify((x), basis, "numpy") # Create a lambda function for the polynomials
plt.figure(figsize=(6, 4))
for k in range(3):
y_eval = basis_num(x_eval)[k]
plt.plot(x_eval, y_eval)
y_interp = sym.simplify(sum(f_data[k]*basis[k] for k in range(3)))
y_interp | _____no_output_____ | MIT | notebooks/lagrange_interpolation.ipynb | jomorlier/FEM-Notes |
Now we plot the complete approximating polynomial, the actual function and the points where the function was known. | y_interp = sum(f_data[k]*basis_num(x_eval)[k] for k in range(3))
y_original = fun(x_eval)
plt.figure(figsize=(6, 4))
plt.plot(x_eval, y_original)
plt.plot(x_eval, y_interp)
plt.plot([-1, 1, 0], f_data, 'ko')
plt.show() | _____no_output_____ | MIT | notebooks/lagrange_interpolation.ipynb | jomorlier/FEM-Notes |
Interpolation in 2 dimensionsWe can extend the concept of Lagrange interpolation to 2 or more dimensions.In the case of bilinear interpolation (2×2, 4 vertices) in $[-1, 1]^2$,the base functions are given by (**prove it**):\begin{align}N_0 = \frac{1}{4}(1 - x)(1 - y)\\N_1 = \frac{1}{4}(1 + x)(1 - y)\\N_2 = \frac{1}{4}(1 + x)(1 + y)\\N_3 = \frac{1}{4}(1 - x)(1 + y)\end{align}Let's see an example using piecewise bilinear interpolation. | def rect_grid(Lx, Ly, nx, ny):
u"""Create a rectilinear grid for a rectangle
The rectangle has dimensiones Lx by Ly. nx are
the number of nodes in x, and ny are the number of nodes
in y
"""
y, x = np.mgrid[-Ly/2:Ly/2:ny*1j, -Lx/2:Lx/2:nx*1j]
els = np.zeros(((nx - 1)*(ny - 1), 4), dtype=int)
for row in range(ny - 1):
for col in range(nx - 1):
cont = row*(nx - 1) + col
els[cont, :] = [cont + row, cont + row + 1,
cont + row + nx + 1, cont + row + nx]
return x.flatten(), y.flatten(), els
def interp_bilinear(coords, f_vals, grid=(10, 10)):
"""Piecewise bilinear interpolation for rectangular domains"""
x_min, y_min = np.min(coords, axis=0)
x_max, y_max = np.max(coords, axis=0)
x, y = np.mgrid[-1:1:grid[0]*1j,-1:1:grid[1]*1j]
N0 = (1 - x) * (1 - y)
N1 = (1 + x) * (1 - y)
N2 = (1 + x) * (1 + y)
N3 = (1 - x) * (1 + y)
interp_fun = N0 * f_vals[0] + N1 * f_vals[1] + N2 * f_vals[2] + N3 * f_vals[3]
interp_fun = 0.25*interp_fun
x, y = np.mgrid[x_min:x_max:grid[0]*1j, y_min:y_max:grid[1]*1j]
return x, y, interp_fun
def fun(x, y):
"""Monkey saddle function"""
return y**3 + 3*y*x**2
x_coords, y_coords, els = rect_grid(2, 2, 4, 4)
nels = els.shape[0]
z_coords = fun(x_coords, y_coords)
z_min = np.min(z_coords)
z_max = np.max(z_coords)
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111, projection='3d')
x, y = np.mgrid[-1:1:51j,-1:1:51j]
z = fun(x, y)
surf = ax.plot_surface(x, y, z, rstride=1, cstride=1, linewidth=0, alpha=0.6,
cmap="viridis")
plt.colorbar(surf, shrink=0.5, aspect=10)
ax.plot(x_coords, y_coords, z_coords, 'ok')
for k in range(nels):
x_vals = x_coords[els[k, :]]
y_vals = y_coords[els[k, :]]
coords = np.column_stack([x_vals, y_vals])
f_vals = fun(x_vals, y_vals)
x, y, z = interp_bilinear(coords, f_vals, grid=[4, 4])
inter = ax.plot_wireframe(x, y, z, color="black", cstride=3, rstride=3)
plt.xlabel(r"$x$", fontsize=18)
plt.ylabel(r"$y$", fontsize=18)
ax.legend([inter], [u"Interpolation"])
plt.show(); | _____no_output_____ | MIT | notebooks/lagrange_interpolation.ipynb | jomorlier/FEM-Notes |
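Regarding the "prove it" remark above: each bilinear shape function is simply the product of two one-dimensional linear Lagrange polynomials on $[-1, 1]$, one in $x$ and one in $y$, evaluated for the corresponding corner. For the corner $(-1, -1)$, for instance,

\begin{equation}
N_0(x, y) = \frac{x - 1}{-1 - 1}\cdot\frac{y - 1}{-1 - 1} = \frac{1 - x}{2}\cdot\frac{1 - y}{2} = \frac{1}{4}(1 - x)(1 - y),
\end{equation}

and $N_1$, $N_2$, $N_3$ follow in the same way by pairing $(1 + x)$ or $(1 - x)$ with $(1 + y)$ or $(1 - y)$ according to the signs of the corner coordinates.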
The next cell changes the format of the notebook. | from IPython.core.display import HTML
def css_styling():
styles = open('../styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling() | _____no_output_____ | MIT | notebooks/lagrange_interpolation.ipynb | jomorlier/FEM-Notes |
**This notebook is an exercise in the [Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/missing-values).**--- Now it's your turn to test your new knowledge of **missing values** handling. You'll probably find it makes a big difference. SetupThe questions will give you feedback on your work. Run the following cell to set up the feedback system. | # Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex2 import *
print("Setup Complete") | Setup Complete
| MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
In this exercise, you will work with data from the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course). Run the next code cell without changes to load the training and validation sets in `X_train`, `X_valid`, `y_train`, and `y_valid`. The test set is loaded in `X_test`. | import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
X_full = pd.read_csv('../input/train.csv', index_col='Id')
X_test_full = pd.read_csv('../input/test.csv', index_col='Id')
# Remove rows with missing target, separate target from predictors
X_full.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = X_full.SalePrice
X_full.drop(['SalePrice'], axis=1, inplace=True)
# To keep things simple, we'll use only numerical predictors
X = X_full.select_dtypes(exclude=['object'])
X_test = X_test_full.select_dtypes(exclude=['object'])
# Break off validation set from training data
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2,
random_state=0) | _____no_output_____ | MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
Use the next code cell to print the first five rows of the data. | X_train.head() | _____no_output_____ | MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
You can already see a few missing values in the first several rows. In the next step, you'll obtain a more comprehensive understanding of the missing values in the dataset. Step 1: Preliminary investigationRun the code cell below without changes. | # Shape of training data (num_rows, num_columns)
print(X_train.shape)
# Number of missing values in each column of training data
missing_val_count_by_column = (X_train.isnull().sum())
print(missing_val_count_by_column[missing_val_count_by_column > 0])
#print(X_train.isnull().sum(axis=0)) | (1168, 36)
LotFrontage 212
MasVnrArea 6
GarageYrBlt 58
dtype: int64
| MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
Part AUse the above output to answer the questions below. | # Fill in the line below: How many rows are in the training data?
num_rows = 1168
# Fill in the line below: How many columns in the training data
# have missing values?
num_cols_with_missing = 3
# Fill in the line below: How many missing entries are contained in
# all of the training data?
tot_missing = 276
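# Sanity check on the arithmetic from the output above: 212 (LotFrontage) + 6 (MasVnrArea) + 58 (GarageYrBlt) = 276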
# Check your answers
step_1.a.check()
# Lines below will give you a hint or solution code
#step_1.a.hint()
#step_1.a.solution() | _____no_output_____ | MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
Part BConsidering your answers above, what do you think is likely the best approach to dealing with the missing values? | # Check your answer (Run this code cell to receive credit!)
step_1.b.check()
step_1.b.hint() | _____no_output_____ | MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
To compare different approaches to dealing with missing values, you'll use the same `score_dataset()` function from the tutorial. This function reports the [mean absolute error](https://en.wikipedia.org/wiki/Mean_absolute_error) (MAE) from a random forest model. | from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
# Function for comparing different approaches
def score_dataset(X_train, X_valid, y_train, y_valid):
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_valid)
return mean_absolute_error(y_valid, preds) | _____no_output_____ | MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
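For reference, the mean absolute error reported by `score_dataset` is simply the average size of the prediction errors on the validation set:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$$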
Step 2: Drop columns with missing valuesIn this step, you'll preprocess the data in `X_train` and `X_valid` to remove columns with missing values. Set the preprocessed DataFrames to `reduced_X_train` and `reduced_X_valid`, respectively. | # Fill in the line below: get names of columns with missing values
missing_col_names = ['LotFrontage','MasVnrArea','GarageYrBlt'] # Your code here
include_column_names = [cols for cols in X_train.columns
if cols not in missing_col_names]
# Fill in the lines below: drop columns in training and validation data
reduced_X_train = X_train[include_column_names]
reduced_X_valid = X_valid[include_column_names]
#print(reduced_X_train)
# Check your answers
step_2.check()
# Lines below will give you a hint or solution code
#step_2.hint()
step_2.solution() | _____no_output_____ | MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
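Hard-coding the three column names works for this particular training set, but a version that derives them from the data is less brittle if the split or the dataset changes. A minimal sketch (it gives the same result here):

```python
# Derive the columns to drop instead of hard-coding them
cols_with_missing = [col for col in X_train.columns
                     if X_train[col].isnull().any()]

reduced_X_train = X_train.drop(cols_with_missing, axis=1)
reduced_X_valid = X_valid.drop(cols_with_missing, axis=1)
```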
Run the next code cell without changes to obtain the MAE for this approach. | print("MAE (Drop columns with missing values):")
print(score_dataset(reduced_X_train, reduced_X_valid, y_train, y_valid)) | MAE (Drop columns with missing values):
17837.82570776256
| MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
Step 3: Imputation Part AUse the next code cell to impute missing values with the mean value along each column. Set the preprocessed DataFrames to `imputed_X_train` and `imputed_X_valid`. Make sure that the column names match those in `X_train` and `X_valid`. | from sklearn.impute import SimpleImputer
# Fill in the lines below: imputation
# Your code here
myimputer = SimpleImputer()
imputed_X_train = pd.DataFrame(myimputer.fit_transform(X_train))
imputed_X_valid = pd.DataFrame(myimputer.transform(X_valid))
# Fill in the lines below: imputation removed column names; put them back
imputed_X_train.columns = X_train.columns
imputed_X_valid.columns = X_valid.columns
# Check your answers
step_3.a.check()
# Lines below will give you a hint or solution code
#step_3.a.hint()
#step_3.a.solution() | _____no_output_____ | MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
Run the next code cell without changes to obtain the MAE for this approach. | print("MAE (Imputation):")
print(score_dataset(imputed_X_train, imputed_X_valid, y_train, y_valid)) | MAE (Imputation):
18062.894611872147
| MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
Part BCompare the MAE from each approach. Does anything surprise you about the results? Why do you think one approach performed better than the other? | # Check your answer (Run this code cell to receive credit!)
step_3.b.check()
#step_3.b.hint() | _____no_output_____ | MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
Step 4: Generate test predictionsIn this final step, you'll use any approach of your choosing to deal with missing values. Once you've preprocessed the training and validation features, you'll train and evaluate a random forest model. Then, you'll preprocess the test data before generating predictions that can be submitted to the competition! Part AUse the next code cell to preprocess the training and validation data. Set the preprocessed DataFrames to `final_X_train` and `final_X_valid`. **You can use any approach of your choosing here!** in order for this step to be marked as correct, you need only ensure:- the preprocessed DataFrames have the same number of columns,- the preprocessed DataFrames have no missing values, - `final_X_train` and `y_train` have the same number of rows, and- `final_X_valid` and `y_valid` have the same number of rows. | # Preprocessed training and validation features
final_X_train = reduced_X_train
final_X_valid = reduced_X_valid
# Check your answers
step_4.a.check()
# Lines below will give you a hint or solution code
#step_4.a.hint()
#step_4.a.solution() | _____no_output_____ | MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
Run the next code cell to train and evaluate a random forest model. (*Note that we don't use the `score_dataset()` function above, because we will soon use the trained model to generate test predictions!*) | # Define and fit model
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(final_X_train, y_train)
# Get validation predictions and MAE
preds_valid = model.predict(final_X_valid)
print("MAE (Your approach):")
print(mean_absolute_error(y_valid, preds_valid)) | MAE (Your approach):
17837.82570776256
| MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
Part BUse the next code cell to preprocess your test data. Make sure that you use a method that agrees with how you preprocessed the training and validation data, and set the preprocessed test features to `final_X_test`.Then, use the preprocessed test features and the trained model to generate test predictions in `preds_test`.In order for this step to be marked correct, you need only ensure:- the preprocessed test DataFrame has no missing values, and- `final_X_test` has the same number of rows as `X_test`. | # Fill in the line below: preprocess test data
final_X_test = X_test[include_column_names]
# Fill in the line below: get test predictions
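# Note: fit_transform below re-fits the imputer on the test set, which leaks test-set
# statistics into the preprocessing. The imputer from Step 3 was fitted on all 36 columns,
# so it cannot be reused directly here; a cleaner option is to fit a fresh imputer on
# final_X_train and only call transform on final_X_test.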
imputed_final_X_test = pd.DataFrame(myimputer.fit_transform(final_X_test))
imputed_final_X_test.columns = final_X_test.columns
final_X_test = imputed_final_X_test
preds_test = model.predict(final_X_test)
# Check your answers
step_4.b.check()
# Lines below will give you a hint or solution code
#step_4.b.hint()
#step_4.b.solution() | _____no_output_____ | MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
Run the next code cell without changes to save your results to a CSV file that can be submitted directly to the competition. | # Save test predictions to file
output = pd.DataFrame({'Id': X_test.index,
'SalePrice': preds_test})
output.to_csv('submission.csv', index=False) | _____no_output_____ | MIT | exercise-missing-values.ipynb | Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate |
**Installing the packages** | #!pip install simpy
import random
import simpy as sy | _____no_output_____ | Apache-2.0 | isye_6501_sim_hw/aarti_solution/Question 13.2.ipynb | oskrgab/isye-6644_project |
**Initializing the variables and creating the simulation** | #Declaring the variables
num_checkers = 10 # Number of Checkers
num_scanners = 5 #Number of scanners
wait_time = 0 #Initial Waiting Time set to 0
total_pax = 1 #Total number of passengers initialized to 1
num_pax = 100 #Overall Passengers set to 100
runtime = 500 #End simulation when runtime crosses 500 mins
arrival_rate = 50 #To simulate a busy airport
check_rate = 0.75 # Mean boarding-pass check time in minutes (exponential service time), as given in the problem statement
class Airport(object):
def __init__(self, env, num_checkers, num_scanners):
self.env = env
self.checker = sy.Resource(env, num_checkers) #Number of boarding pass checkers
self.scanners = []
for i in range(0, num_scanners): #Number of scanners
self.scanners.append(sy.Resource(env))
def BP_check(self, pax):
service_time = random.expovariate(1/check_rate)
yield self.env.timeout(service_time)
def scan(self, pax):
scan_time = random.uniform(0.5, 1)
yield self.env.timeout(scan_time)
def Passenger(self, env, number):
        global wait_time # global accumulator for the total wait time (averaged at the end)
global total_pax
arrival_time = env.now
scan_queue = [] #Every scanner has its own queue
with self.checker.request() as request:
yield request
yield env.process(self.BP_check(number))
for scanner in self.scanners:
scan_queue.append(len(scanner.queue)) #getting the length of each scanner
#Find the shortest scanner queue
min_index = min(scan_queue)
short_queue_index = scan_queue.index(min_index)
with self.scanners[short_queue_index].request() as request:
yield request
yield env.process(self.scan(number))
exit_time = env.now
wait_time += (exit_time - arrival_time)
total_pax += 1
def setup(self, env, num_pax):
yield env.timeout(random.expovariate(arrival_rate))
env.process(self.Passenger(env, num_pax))
#Running the simulation
env = sy.Environment()
api = Airport(env, num_checkers, num_scanners)
for i in range(0,num_pax):
env.process(api.setup(env, i))
env.run(until = runtime)
avg_wait_time = wait_time / total_pax
print("Avg waiting time is %f" %avg_wait_time)
| Avg waiting time is 8.104722
| Apache-2.0 | isye_6501_sim_hw/aarti_solution/Question 13.2.ipynb | oskrgab/isye-6644_project |
Conversion reaction=================== | import importlib
import os
import sys
import numpy as np
import amici
import amici.plotting
import pypesto
# sbml file we want to import
sbml_file = 'conversion_reaction/model_conversion_reaction.xml'
# name of the model that will also be the name of the python module
model_name = 'model_conversion_reaction'
# directory to which the generated model code is written
model_output_dir = 'tmp/' + model_name | _____no_output_____ | BSD-3-Clause | doc/example/conversion_reaction.ipynb | LarsFroehling/pyPESTO |
Compile AMICI model | # import sbml model, compile and generate amici module
sbml_importer = amici.SbmlImporter(sbml_file)
sbml_importer.sbml2amici(model_name,
model_output_dir,
verbose=False) | _____no_output_____ | BSD-3-Clause | doc/example/conversion_reaction.ipynb | LarsFroehling/pyPESTO |
Load AMICI model | # load amici module (the usual starting point later for the analysis)
sys.path.insert(0, os.path.abspath(model_output_dir))
model_module = importlib.import_module(model_name)
model = model_module.getModel()
model.requireSensitivitiesForAllParameters()
model.setTimepoints(amici.DoubleVector(np.linspace(0, 10, 11)))
model.setParameterScale(amici.ParameterScaling_log10)
model.setParameters(amici.DoubleVector([-0.3,-0.7]))
solver = model.getSolver()
solver.setSensitivityMethod(amici.SensitivityMethod_forward)
solver.setSensitivityOrder(amici.SensitivityOrder_first)
# how to run amici now:
rdata = amici.runAmiciSimulation(model, solver, None)
amici.plotting.plotStateTrajectories(rdata)
edata = amici.ExpData(rdata, 0.2, 0.0) | _____no_output_____ | BSD-3-Clause | doc/example/conversion_reaction.ipynb | LarsFroehling/pyPESTO |
Optimize | # create objective function from amici model
# pesto.AmiciObjective is derived from pesto.Objective,
# the general pesto objective function class
objective = pypesto.AmiciObjective(model, solver, [edata], 1)
# create optimizer object which contains all information for doing the optimization
optimizer = pypesto.ScipyOptimizer(method='ls_trf')
#optimizer.solver = 'bfgs|meigo'
# if select meigo -> also set default values in solver_options
#optimizer.options = {'maxiter': 1000, 'disp': True} # = pesto.default_options_meigo()
#optimizer.startpoints = []
#optimizer.startpoint_method = 'lhs|uniform|something|function'
#optimizer.n_starts = 100
# see PestoOptions.m for more required options here
# returns OptimizationResult, see parameters.MS for what to return
# list of final optim results foreach multistart, times, hess, grad,
# flags, meta information (which optimizer -> optimizer.get_repr())
# create problem object containing all information on the problem to be solved
problem = pypesto.Problem(objective=objective,
lb=[-2,-2], ub=[2,2])
# maybe lb, ub = inf
# other constraints: kwargs, class pesto.Constraints
# constraints on pams, states, esp. pesto.AmiciConstraints (e.g. pam1 + pam2<= const)
# if optimizer cannot handle -> error
# maybe also scaling / transformation of parameters encoded here
# do the optimization
result = pypesto.minimize(problem=problem,
optimizer=optimizer,
n_starts=10)
# optimize is a function since it does not need an internal memory,
# just takes input and returns output in the form of a Result object
# 'result' parameter: e.g. some results from somewhere -> pick best start points | _____no_output_____ | BSD-3-Clause | doc/example/conversion_reaction.ipynb | LarsFroehling/pyPESTO |
Visualize | # waterfall, parameter space, scatter plots, fits to data
# different functions for different plotting types
import pypesto.visualize
pypesto.visualize.waterfall(result)
pypesto.visualize.parameters(result) | _____no_output_____ | BSD-3-Clause | doc/example/conversion_reaction.ipynb | LarsFroehling/pyPESTO |
Data storage | # result = pypesto.storage.load('db_file.db') | _____no_output_____ | BSD-3-Clause | doc/example/conversion_reaction.ipynb | LarsFroehling/pyPESTO |
Profiles | # there are three main parts: optimize, profile, sample. the overall structure of profiles and sampling
# will be similar to optimizer like above.
# we intend to only have just one result object which can be reused everywhere, but the problem of how to
# not have one huge class but
# maybe simplified views on it for optimization, profiles and sampling is still to be solved
# profiler = pypesto.Profiler()
# result = pypesto.profile(problem, profiler, result=None)
# possibly pass result object from optimization to get good parameter guesses | _____no_output_____ | BSD-3-Clause | doc/example/conversion_reaction.ipynb | LarsFroehling/pyPESTO |
Sampling | # sampler = pypesto.Sampler()
# result = pypesto.sample(problem, sampler, result=None)
# open: how to parallelize. the idea is to use methods similar to those in pyabc for working on clusters.
# one way would be to specify an additional 'engine' object passed to optimize(), profile(), sample(),
# which in the default setting just does a for loop, but can also be customized. | _____no_output_____ | BSD-3-Clause | doc/example/conversion_reaction.ipynb | LarsFroehling/pyPESTO |
Compute the error rate | df_sorted.head()
# df_sorted['diff'] = df_sorted['real'] - df_sorted['model']
df_sorted['error'] = abs(df_sorted['real'] - df_sorted['model'] )/ df_sorted['real']
df_sorted.head()
df_sorted['error'].mean()
df_sorted['error'].std()
df_sorted.shape | _____no_output_____ | MIT | cmp_cmp/ModelPredict_plot.ipynb | 3upperm2n/block_trace_analyzer |
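Written out, the per-row quantity computed above is the absolute relative error, and the reported mean is its average over the rows (the mean absolute percentage error, once multiplied by 100):

$$e_i = \frac{\left|y_i^{\mathrm{real}} - y_i^{\mathrm{model}}\right|}{y_i^{\mathrm{real}}}, \qquad \bar{e} = \frac{1}{n}\sum_{i=1}^{n} e_i$$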
Plot types by function: 1. Comparison 2. Proportion 3. Relationship 4. Part to a whole 5. Distribution 6. Change over time | import numpy as np
import pandas as pd
import seaborn as sns
from plotly import tools
import plotly.plotly as py
from plotly.offline import init_notebook_mode,iplot
init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.figure_factory as ff
import matplotlib.pyplot as plt
data = pd.read_csv("Pokemon.csv")
print(data.shape)
data.head()
data.isna().sum()
fig = ff.create_distplot([data.HP],['HP'],bin_size=5)
iplot(fig, filename='Basic Distplot')
hist_data = [data['Attack'],data['Defense']]
group_labels = ['Attack','Defense']
fig = ff.create_distplot(hist_data, group_labels, bin_size=5, show_hist=False, show_rug=False)
iplot(fig, filename='Distplot of attack and defense')
trace0 = go.Box(y=data["HP"],name="HP", boxmean=True)
trace1 = go.Box(y=data["Attack"],name="Attack", boxmean=True)
trace2 = go.Box(y=data["Defense"],name="Defense", boxmean=True)
trace3 = go.Box(y=data["Sp. Atk"],name="Sp. Atk", boxmean=True)
trace4 = go.Box(y=data["Sp. Def"],name="Sp. Def", boxmean=True)
trace5 = go.Box(y=data["Speed"],name="Speed", boxmean=True)
dat = [trace0, trace1, trace2,trace3, trace4, trace5]
iplot(dat)
feats = ['HP','Attack','Defense','Sp. Atk','Sp. Def','Speed']
x = data[data["Name"] == "Charizard"]
t1 = go.Scatterpolar(theta = feats,
r = [x[i].values[0] for i in feats], fill = 'toself', name = x.Name.values[0])
layout = go.Layout(
polar = dict(
radialaxis = dict(
visible = True,
range = [0, 255]
)
),
showlegend = True,
title = "Stats of {}".format(x.Name.values[0])
)
dat = [t1]
fig = go.Figure(data=dat, layout=layout)
iplot(fig)
x = data[data["Name"] == "Charizard"]
t1 = go.Scatterpolar(theta = feats,
r = [x[i].values[0] for i in feats], fill = 'toself', name = x.Name.values[0])
y = data[data["Name"] == "Pikachu"]
t2 = go.Scatterpolar(theta = feats,
r = [y[i].values[0] for i in feats], fill = 'toself', name = y.Name.values[0])
layout = go.Layout(
polar = dict(
radialaxis = dict(
visible = True,
range = [0, 255]
)
),
showlegend = True,
title = "{} vs {}".format(x.Name.values[0],y.Name.values[0])
)
dat = [t1, t2]
fig = go.Figure(data=dat, layout=layout)
iplot(fig)
t1 = go.Scatter(
x = data["Defense"],
y = data["Attack"],
mode='markers',
marker=dict(
size=10
),
text=data["Name"]
)
dat = [t1]
layout = go.Layout(
showlegend = True,
font=dict(family='Courier New, monospace', size=10, color='#ffffff'),
title="Scatter plot of Defense vs Attack with Speed as colorscale",
xaxis = dict(showgrid = True),yaxis = dict(showgrid = True)
)
fig = go.Figure(data=dat, layout=layout)
iplot(fig, filename = "Scatterplot")
t1 = go.Scatter(
x = data["Defense"],
y = data["Attack"],
mode='markers',
marker=dict(
size = data["Speed"]/10,
),
text=data["Name"]
)
dat = [t1]
layout = go.Layout(
showlegend = True,
font=dict(family='Courier New, monospace', size=10, color='#ffffff'),
title="Scatter plot of Defense vs Attack with Speed as colorscale",
xaxis = dict(showgrid = True),yaxis = dict(showgrid = True)
)
fig = go.Figure(data=dat, layout=layout)
iplot(fig, filename = "Scatterplot")
t1 = go.Scatter(
x = data["Defense"],
y = data["Attack"],
mode='markers',
marker=dict(
size=10,
color = data["Speed"],
showscale=True
),
text=data["Name"]
)
dat = [t1]
layout = go.Layout(
showlegend = True,
font=dict(family='Courier New, monospace', size=10, color='#ffffff'),
title="Scatter plot of Defense vs Attack with Speed as colorscale",
xaxis = dict(showgrid = True),yaxis = dict(showgrid = True)
)
fig = go.Figure(data=dat, layout=layout)
iplot(fig, filename = "Scatterplot")
t = go.Scatter3d(
x=data["Speed"],
y=data["Attack"],
z=data["Defense"],
mode='markers',
marker=dict(
size=3,
line=dict(
color='rgba(217, 217, 217, 0.14)',
width=0.5
),
opacity=1
)
)
dat = [t]
layout = go.Layout(
margin=dict(
l=0,
r=0,
b=0,
t=0
),
xaxis=dict(title="Speed"),
yaxis=dict(title="Attack"),
title = "Speed vs Attack vs Defense"
)
fig = go.Figure(data=dat, layout=layout)
iplot(fig, filename='3d-scatter')
legendary_grouped = data.groupby(['Legendary', 'Generation']).mean()[['Attack', 'Defense', "Sp. Atk",
"Sp. Def", "Speed"]]
names = ["False" + "_" + str(i) for i in range(1,7)]
names.extend(["True" + "_" + str(i) for i in range(1,7)])
t1 = go.Bar(x=names, y=legendary_grouped.Attack, name="Attack")
t2 = go.Bar(x=names, y=legendary_grouped.Defense, name="Defense")
#layout = dict(barmode = 'group')
layout = dict(barmode = 'stack')
dat = [t1, t2]
figure = dict(data=dat,layout=layout)
iplot(figure)
t1 = go.Bar(x=names, y=legendary_grouped.Attack, name="Attack")
t2 = go.Bar(x=names, y=legendary_grouped.Defense, name="Defense")
layout = dict(barmode = 'group')
#layout = dict(barmode = 'stack')
dat = [t1, t2]
figure = dict(data=dat,layout=layout)
iplot(figure)
t1 = go.Bar(x=names, y=legendary_grouped.Attack, name="Attack")
t2 = go.Bar(x=names, y=legendary_grouped.Defense, name="Defense")
dat = [t1, t2]
figure = tools.make_subplots(rows=2, cols=1, subplot_titles=('Plot 1', 'Plot 2'))
figure.append_trace(t1, 1, 1)
figure.append_trace(t2, 2, 1)
iplot(figure)
t1 = go.Bar(x=names, y=legendary_grouped.Attack, name="Attack")
t2 = go.Bar(x=names, y=legendary_grouped.Defense, name="Defense")
dat = [t1, t2]
figure = tools.make_subplots(rows=1, cols=2, subplot_titles=('Attack', 'Defence'))
figure.append_trace(t1, 1, 1)
figure.append_trace(t2, 1, 2)
iplot(figure)
legendary_grouped = data.groupby(['Legendary', 'Generation']).mean()[['HP', 'Attack', 'Defense', "Sp. Atk",
"Sp. Def", "Speed"]]
data.head()
type_grouped = data.groupby("Type 1").mean()[['HP','Attack', 'Defense', "Sp. Atk", "Sp. Def", "Speed"]]
dat = [go.Bar(x=type_grouped.index, y=type_grouped[i], name=i) for i in type_grouped.columns]
figure = tools.make_subplots(rows=6, cols=1, subplot_titles=('Attack', 'Defence'))
for i in range(1,7):
figure.append_trace(dat[i-1], i, 1)
iplot(figure)
type_grouped
dat = [go.Bar(x=type_grouped.index, y=type_grouped[i], name=i) for i in type_grouped.columns]
#layout = dict(barmode = 'group')
layout = dict(barmode = 'stack')
figure = dict(data=dat,layout=layout)
iplot(figure)
dat = [go.Box(x=data[data["Type 1"] == i]["HP"], name="HP" + "_" + i, boxmean=True, orientation = "h",)
for i in list(set(data["Type 1"].tolist()))]
iplot(dat)
dat = [go.Box(y=data[data["Type 1"] == i]["HP"], name="HP" + "_" + i, boxmean=True, orientation = "v",)
for i in list(set(data["Type 1"].tolist()))]
iplot(dat)
corr_matrix_list = data[["HP", "Attack", "Defense", "Sp. Atk", "Sp. Def", "Speed",
"Generation", "Legendary"]].corr().values.tolist()
x_axis = data.corr().columns
y_axis = data.corr().index.values
trace = go.Heatmap(x=x_axis ,y=y_axis, z=corr_matrix_list, colorscale='Blues')
dat = [trace]
figure = dict(data=dat)
iplot(figure) | _____no_output_____ | MIT | Lectures/Lecture_5/Pokemon.ipynb | lev1khachatryan/DataVisualization |
DNN for image classification | from IPython.display import IFrame
IFrame(src= "https://cdnapisec.kaltura.com/p/2356971/sp/235697100/embedIframeJs/uiconf_id/41416911/partner_id/2356971?iframeembed=true&playerId=kaltura_player&entry_id=1_zltbjpto&flashvars[streamerType]=auto&flashvars[localizationCode]=en&flashvars[leadWithHTML5]=true&flashvars[sideBarContainer.plugin]=true&flashvars[sideBarContainer.position]=left&flashvars[sideBarContainer.clickToClose]=true&flashvars[chapters.plugin]=true&flashvars[chapters.layout]=vertical&flashvars[chapters.thumbnailRotator]=false&flashvars[streamSelector.plugin]=true&flashvars[EmbedPlayer.SpinnerTarget]=videoHolder&flashvars[dualScreen.plugin]=true&flashvars[hotspots.plugin]=1&flashvars[Kaltura.addCrossoriginToIframe]=true&&wid=1_gjz238z7" ,width='800', height='500') | _____no_output_____ | MIT | _build/jupyter_execute/Module3/m3_07.ipynb | liuzhengqi1996/math452_Spring2022 |
**Bayes Methods - Census Data** Importing the *data* | import pickle
with open("/content/censo.pkl","rb") as f:
x_censo_treino,y_censo_treino,x_censo_teste,y_censo_teste = pickle.load(f)
| _____no_output_____ | MIT | Metodo_de_Bayes_Censo.ipynb | VictorCalebeIFG/MachineLearning_Python |
Train the predictive model: | from sklearn.naive_bayes import GaussianNB
naive = GaussianNB()
naive.fit(x_censo_treino,y_censo_treino) | _____no_output_____ | MIT | Metodo_de_Bayes_Censo.ipynb | VictorCalebeIFG/MachineLearning_Python |
Predictions | previsoes = naive.predict(x_censo_teste)
previsoes | _____no_output_____ | MIT | Metodo_de_Bayes_Censo.ipynb | VictorCalebeIFG/MachineLearning_Python |
Checking the model's accuracy | from sklearn.metrics import accuracy_score
accuracy_score(y_censo_teste,previsoes) | _____no_output_____ | MIT | Metodo_de_Bayes_Censo.ipynb | VictorCalebeIFG/MachineLearning_Python |
As can be seen, the model's accuracy is quite low. Sometimes it will be necessary to modify the preprocessing ("according to the professor, removing the standardization step from the preprocessing, for this algorithm and this specific dataset, increased the accuracy to 75%"). | from yellowbrick.classifier import ConfusionMatrix
cm = ConfusionMatrix(naive)
cm.fit(x_censo_treino,y_censo_treino)
cm.score(x_censo_teste,y_censo_teste) | _____no_output_____ | MIT | Metodo_de_Bayes_Censo.ipynb | VictorCalebeIFG/MachineLearning_Python |
Start here to begin with Stingray. | import numpy as np
%matplotlib inline
import warnings
warnings.filterwarnings('ignore') | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
Creating a light curve | from stingray import Lightcurve | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
A `Lightcurve` object can be created in two ways: 1. From an array of time stamps and an array of counts. 2. From photon arrival times. 1. Array of time stamps and counts: Create 1000 time stamps | times = np.arange(1000)
times[:10] | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
Create 1000 random Poisson-distributed counts: | counts = np.random.poisson(100, size=len(times))
counts[:10] | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
Create a Lightcurve object with the times and counts array. | lc = Lightcurve(times, counts) | WARNING:root:Checking if light curve is well behaved. This can take time, so if you are sure it is already sorted, specify skip_checks=True at light curve creation.
WARNING:root:Checking if light curve is sorted.
WARNING:root:Computing the bin time ``dt``. This can take time. If you know the bin time, please specify it at light curve creation
| MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
The number of data points can be counted with the `len` function. | len(lc) | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
Note the warnings thrown by the syntax above. By default, `stingray` does a number of checks on the data that is put into the `Lightcurve` class. For example, it checks whether it's evenly sampled. It also computes the time resolution `dt`. All of these checks take time. If you know the time resolution, it's a good idea to put it in manually. If you know that your light curve is well-behaved (for example, because you know the data really well, or because you've generated it yourself, as we've done above), you can skip those checks and save a bit of time: | dt = 1
lc = Lightcurve(times, counts, dt=dt, skip_checks=True) | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
2. Photon Arrival TimesOften, you might have unbinned photon arrival times, rather than a light curve with time stamps and associated measurements. If this is the case, you can use the `make_lightcurve` method to turn these photon arrival times into a regularly binned light curve. | arrivals = np.loadtxt("photon_arrivals.txt")
arrivals[:10]
lc_new = Lightcurve.make_lightcurve(arrivals, dt=1) | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
The time bins and respective counts can be seen with `lc.counts` and `lc.time` | lc_new.counts
lc_new.time | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
One useful feature is that you can explicitly pass in the start time and the duration of the observation. This can be helpful because the chance that a photon will arrive exactly at the start of the observation and the end of the observation is very small. In practice, when making multiple light curves from the same observation (e.g. individual light curves of multiple detectors, of for different energy ranges) this can lead to the creation of light curves with time bins that are *slightly* offset from one another. Here, passing in the total duration of the observation and the start time can be helpful. | lc_new = Lightcurve.make_lightcurve(arrivals, dt=1.0, tstart=1.0, tseg=9.0) | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
Properties: A Lightcurve object has the following properties: 1. `time`: numpy array of time values 2. `counts`: numpy array of counts per bin values 3. `counts_err`: numpy array with the uncertainties on the values in `counts` 4. `countrate`: numpy array of counts per second 5. `countrate_err`: numpy array of the uncertainties on the values in `countrate` 6. `n`: number of data points in the light curve 7. `dt`: time resolution of the light curve 8. `tseg`: total duration of the light curve 9. `tstart`: start time of the light curve 10. `meancounts`: the mean counts of the light curve 11. `meanrate`: the mean count rate of the light curve 12. `mjdref`: MJD reference date (``tstart`` / 86400 gives the date in MJD at the start of the observation) 13. `gti`: Good Time Intervals; they indicate the "safe" time intervals to be used during the analysis of the light curve 14. `err_dist`: statistic of the Lightcurve, used to calculate the uncertainties and other statistical values appropriately; it propagates to the Spectrum classes | lc.n == len(lc) | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
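A quick way to inspect several of these attributes at once on the `lc` object created earlier (a small illustrative sketch, not part of the original tutorial):

```python
print(lc.dt)          # time resolution
print(lc.tstart)      # start time of the light curve
print(lc.tseg)        # total duration
print(lc.meancounts)  # mean counts per bin
print(lc.meanrate)    # mean count rate
print(lc.err_dist)    # error distribution assumed for the uncertainties
```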
Note that by default, `stingray` assumes that the user is passing a light curve in **counts per bin**. That is, the counts in bin $i$ will be the number of photons that arrived in the interval between $t_i - 0.5\Delta t$ and $t_i + 0.5\Delta t$. Sometimes, data is given in **count rate**, i.e. the number of events that arrive within an interval of a *second*. The two will only be the same if the time resolution of the light curve is exactly 1 second. Whether the input data is in counts per bin or in count rate can be toggled via the boolean `input_counts` keyword argument. By default, this argument is set to `True`, and the code assumes the light curve passed into the object is in counts/bin. By setting it to `False`, the user can pass in count rates: | # times with a resolution of 0.1
dt = 0.1
times = np.arange(0, 100, dt)
times[:10]
mean_countrate = 100.0
countrate = np.random.poisson(mean_countrate, size=len(times))
lc = Lightcurve(times, counts=countrate, dt=dt, skip_checks=True, input_counts=False) | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
Internally, both the `counts` and `countrate` attributes will be defined no matter what the user passes in, since they're trivially converted between each other through a multiplication/division with `dt`: | print(mean_countrate)
print(lc.countrate[:10])
mean_counts = mean_countrate * dt
print(mean_counts)
print(lc.counts[:10]) | 10.0
[11.3 9.2 11. 9.7 10.1 10.2 10.3 10.1 12.4 8.9]
| MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
Error Distributions in `stingray.Lightcurve`: The instruments that record our data impose measurement noise on the data. Depending on the type of instrument, the statistical distribution of that noise can be different. `stingray` was originally developed with X-ray data in mind, where most data comes in the form of _photon arrival times_, which generate measurements distributed according to a Poisson distribution. By default, `err_dist` is assumed to be Poisson, and this is the only statistical distribution currently fully supported. But you *can* put in your own errors (via `counts_err` or `countrate_err`). It'll produce a warning, and be aware that some of the statistical assumptions made about downstream products (e.g. the normalization of periodograms) may not be correct: | times = np.arange(1000)
mean_flux = 100.0 # mean flux
std_flux = 2.0 # standard deviation on the flux
# generate fluxes with a Gaussian distribution and
# an array of associated uncertainties
flux = np.random.normal(loc=mean_flux, scale=std_flux, size=len(times))
flux_err = np.ones_like(flux) * std_flux
lc = Lightcurve(times, flux, err=flux_err, err_dist="gauss", dt=1.0, skip_checks=True) | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
Good Time Intervals: `Lightcurve` (and most other core `stingray` classes) supports the use of *Good Time Intervals* (or GTIs), which denote the parts of an observation that are reliable for scientific purposes. Often, GTIs introduce gaps (e.g. where the instrument was off, or affected by solar flares). By default, GTIs are simply stored and are not applied to the data within a `Lightcurve` object, but they become relevant in a number of circumstances, such as when generating `Powerspectrum` objects. If no GTIs are given at instantiation of the `Lightcurve` class, an artificial GTI will be created spanning the entire length of the data set being passed in: | times = np.arange(1000)
counts = np.random.poisson(100, size=len(times))
lc = Lightcurve(times, counts, dt=1, skip_checks=True)
lc.gti
print(times[0]) # first time stamp in the light curve
print(times[-1]) # last time stamp in the light curve
print(lc.gti) # the GTIs generated within Lightcurve | 0
999
[[-5.000e-01 9.995e+02]]
| MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
GTIs are defined as a list of tuples: | gti = [(0, 500), (600, 1000)]
lc = Lightcurve(times, counts, dt=1, skip_checks=True, gti=gti)
print(lc.gti) | [[ 0 500]
[ 600 1000]]
| MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
We'll get back to these when we talk more about some of the methods that apply GTIs to the data. Operations Addition/Subtraction Two light curves can be summed up or subtracted from each other if they have same time arrays. | lc = Lightcurve(times, counts, dt=1, skip_checks=True)
lc_rand = Lightcurve(np.arange(1000), [500]*1000, dt=1, skip_checks=True)
lc_sum = lc + lc_rand
print("Counts in light curve 1: " + str(lc.counts[:5]))
print("Counts in light curve 2: " + str(lc_rand.counts[:5]))
print("Counts in summed light curve: " + str(lc_sum.counts[:5])) | Counts in light curve 1: [103 99 102 109 104]
Counts in light curve 2: [500 500 500 500 500]
Counts in summed light curve: [603 599 602 609 604]
| MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
Negation A negation operation on the lightcurve object inverts the count array from positive to negative values. | lc_neg = -lc
lc_sum = lc + lc_neg
np.all(lc_sum.counts == 0) # All the points on lc and lc_neg cancel each other | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |
Indexing Count value at a particular time can be obtained using indexing. | lc[120] | _____no_output_____ | MIT | Lightcurve/Lightcurve tutorial.ipynb | jdswinbank/notebooks |