Dataset column summary (string lengths):

| column    | min | max   |
|-----------|-----|-------|
| markdown  | 0   | 1.02M |
| code      | 0   | 832k  |
| output    | 0   | 1.02M |
| license   | 3   | 36    |
| path      | 6   | 265   |
| repo_name | 6   | 127   |
# Mapping QTL in BXD mice using R/qtl2

[Karl Broman](https://kbroman.org) ([ORCID](https://orcid.org/0000-0002-4914-6671)), [Department of Biostatistics & Medical Informatics](https://www.biostat.wisc.edu), [University of Wisconsin–Madison](https://www.wisc.edu)

Our aim in this tutorial is to demonstrate how to map quantitative trait loci (QTL) in the BXD mouse recombinant inbred lines using the [R/qtl2](https://kbroman.org/qtl2) software. We will first show how to download BXD phenotypes from [GeneNetwork2](http://gn2.genenetwork.org) using its API, via the R package [R/GNapi](https://github.com/rqtl/GNapi). At the end, we will use the [R/qtl2browse](https://github.com/rqtl/qtl2browse) package to display genome scan results using the [Genetics Genome Browser](https://github.com/chfi/purescript-genome-browser).

## Acquiring phenotypes with the GeneNetwork API

We will first use the [GeneNetwork2](http://gn2.genenetwork.org) API to acquire BXD phenotypes to use for mapping. We will use the R package [R/GNapi](https://github.com/rqtl/GNapi). We first need to install the package, which is not available on [CRAN](https://cran.r-project.org), but is available via a private repository.

```r
install.packages("GNapi", repos="http://rqtl.org/qtl2cran")
```

We then load the package using `library()`.
install.packages("GNapi", repos="http://rqtl.org/qtl2cran") library(GNapi)
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
The [R/GNapi](https://github.com/kbroman/GNapi) package has a variety of functions. For an overview, see [its vignette](http://kbroman.org/GNapi/GNapi.html). Here we will just do one thing: use the function `get_pheno()` to grab BXD phenotype data. You provide a data set and a phenotype. Phenotype 10038 concerns "habituation", measured as the difference in locomotor activity between day 1 and day 3 in a 5-minute test trial.
phe <- get_pheno("BXD", "10038")
head(phe)
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
We will use just the column "value", but we need to include the strain names so that R/qtl2 can line up these phenotypes with the genotypes.
pheno <- setNames(phe$value, phe$sample_name)
head(pheno)
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
## Acquire genotype data with R/qtl2

We now want to get genotype data for the BXD panel. We first need to install the [R/qtl2](https://kbroman.org/qtl2) package. As with R/GNapi, it is not available on CRAN, but rather is distributed via a private repository.

```r
install.packages("qtl2", repos="http://rqtl.org/qtl2cran")
```

We then load the package with `library()`.
install.packages("qtl2", repos="http://rqtl.org/qtl2cran") library(qtl2)
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
R/qtl2 uses a special file format for QTL data ([described here](https://kbroman.org/qtl2/assets/vignettes/input_files.html)). There are a variety of sample datasets [on Github](https://github.com/rqtl/qtl2data), including genotypes for the [mouse BXD lines](https://github.com/rqtl/qtl2data/tree/master/BXD), taken from [GeneNetwork2](http://gn2.genenetwork.org). We'll load those data directly into R using the function `read_cross2()`.
bxd_file <- "https://raw.githubusercontent.com/rqtl/qtl2data/master/BXD/bxd.zip" bxd <- read_cross2(bxd_file)
Warning message in recode_geno(sheet, genotypes): “117497 genotypes treated as missing: "H"”
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
We get a warning message about heterozygous genotypes being omitted. A number of the newer BXD lines have considerable heterozygosity. But these lines weren't among those phenotyped in the data we downloaded above, and so we don't need to worry about it here.

The data are read into the object `bxd`, which has class `"cross2"`. It contains the genotypes as well as the genetic and physical marker maps. There are also phenotype data (which we will ignore).

We can get a quick summary of the dataset with `summary()`. For reasons that I don't understand, it gets printed as a big mess within this Jupyter notebook, and so here we need to surround it with `print()` to get the intended output.
print( summary(bxd) )
Object of class cross2 (crosstype "risib")

Total individuals              198
No. genotyped individuals      198
No. phenotyped individuals     198
No. with both geno & pheno     198

No. phenotypes                5806
No. covariates                   0
No. phenotype covariates         1

No. chromosomes                 20
Total markers                 7320

No. markers by chr:
  1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19   X
636 583 431 460 470 449 437 319 447 317 375 308 244 281 247 272 291 250 310 193
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
The first step in QTL analysis is to calculate genotype probabilities at putative QTL positions across the genome, conditional on the observed marker data. This allows us to consider positions between the genotyped markers and to allow for the presence of genotyping errors.

First, we need to define the positions that we will consider. We will take the observed marker positions and insert a set of "pseudomarkers" (marker-like positions that are not actually markers). We do this with the function `insert_pseudomarkers()`. We pull the genetic map (`gmap`) out of the `bxd` data as our basic map; `step=0.2` and `stepwidth="max"` mean to insert pseudomarkers so that no two adjacent markers or pseudomarkers are more than 0.2 cM apart. That is, in any marker interval that is greater than 0.2 cM, we will insert one or more evenly spaced pseudomarkers, so that the intervals between markers and pseudomarkers are no more than 0.2 cM.
gmap <- insert_pseudomarkers(bxd$gmap, step=0.2, stepwidth="max")
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
We will be interested in results with respect to the physical map (in Mbp), and so we need to create a corresponding map that includes the pseudomarker positions. We do this with the function `interp_map()`, which uses linear interpolation to get estimated positions for the inserted pseudomarkers.
pmap <- interp_map(gmap, bxd$gmap, bxd$pmap)
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
We can now proceed with calculating genotype probabilities for all BXD strains at all markers and pseudomarkers, conditional on the observed marker genotypes and assuming a 0.2% genotyping error rate. We use the [Carter-Falconer](https://doi.org/10.1007/BF02996226) map function to convert between cM and recombination fractions; it assumes a high degree of crossover interference, appropriate for the mouse.
pr <- calc_genoprob(bxd, gmap, error_prob=0.002, map_function="c-f")
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
In the QTL analysis, we will fit a linear mixed model to account for polygenic background effects. We will use the "leave one chromosome out" (LOCO) method for this. When we scan a chromosome for a QTL, we include a polygenic term with a kinship matrix derived from all other chromosomes. We first need to calculate this set of kinship matrices, which we do with the function `calc_kinship()`. The second argument, `"loco"`, indicates that we want to calculate a list of kinship matrices, each derived from the genotype probabilities but leaving one chromosome out.
k <- calc_kinship(pr, "loco")
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
Now, finally, we're ready to perform the genome scan, which we do with the function `scan1()`. It takes the genotype probabilities and a set of phenotypes (here, just one phenotype). If kinship matrices are provided (here, as `k`), the scan is performed using a linear mixed model. To make the calculations faster, the residual polygenic variance is first estimated without including any QTL effect and is then taken to be fixed and known during the scan.
out <- scan1(pr, pheno, k)
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
The output of `scan1()` is a matrix of LOD scores; the rows are marker/pseudomarker positions and the columns are phenotypes. We can plot the results using `plot.scan1()`, and we can just use `plot()` because it uses the class of its input to determine what plot to make.

Here I'm using the package [repr](https://cran.r-project.org/package=repr) to control the height and width of the plot that's created. I installed it with `install.packages("repr")`. You can ignore that part, if you want.
library(repr)
options(repr.plot.height=4, repr.plot.width=8)
par(mar=c(5.1, 4.1, 0.6, 0.6))
plot(out, pmap)
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
There's a clear QTL on chromosome 15. We can make a plot of just that chromosome with the argument `chr=15`.
plot(out, pmap, chr=15)
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
Let's create a plot of the phenotype vs the genotype at the inferred QTL. We first need to identify the QTL location, which we can do using `max()`. We then use `maxmarg()` to get inferred genotypes at the inferred QTL.
mx <- max(out, pmap)
g_imp <- maxmarg(pr, pmap, chr=mx$chr, pos=mx$pos, return_char=TRUE)
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
We can use `plot_pxg()` to plot the phenotype as a function of QTL genotype. We use `swap_axes=TRUE` to have the phenotype on the x-axis and the genotype on the y-axis, rather than the other way around. Here we see that the BB and DD genotypes are completely separated, phenotypically.
par(mar=c(5.1, 4.1, 0.6, 0.6))
plot_pxg(g_imp, pheno, swap_axes=TRUE, xlab="Habituation phenotype")
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
## Browse genome scan results with the Genetics Genome Browser

The [Genetics Genome Browser](https://github.com/chfi/purescript-genome-browser) is a fast, lightweight, PureScript-based genome browser developed for browsing GWAS or QTL analysis results. We'll use the R package [R/qtl2browse](https://github.com/rqtl/qtl2browse) to view our QTL mapping results in the GGB.

We first need to install the R/qtl2browse package, again from a private [CRAN](https://cran.r-project.org)-like repository.

```r
install.packages("qtl2browse", repos="http://rqtl.org/qtl2cran")
```

We then load the package and use its one function, `browse()`, which takes the `scan1()` output and corresponding physical map (in Mbp). This will open the Genetics Genome Browser in a separate tab in your web browser.
library(qtl2browse)
browse(out, pmap)
_____no_output_____
CC0-1.0
CTC2019_tutorial.ipynb
genenetwork/Teaching_CTC2019
# Structured and time series data

This notebook contains an implementation of the third-place result in the Rossmann Kaggle competition as detailed in Guo/Berkhahn's [Entity Embeddings of Categorical Variables](https://arxiv.org/abs/1604.06737).

The motivation behind exploring this architecture is its relevance to real-world applications. Most data used for decision making day-to-day in industry is structured and/or time-series data. Here we explore the end-to-end process of using neural networks with practical structured data problems.
%matplotlib inline
%reload_ext autoreload
%autoreload 2

from fastai.structured import *
from fastai.column_data import *
np.set_printoptions(threshold=50, edgeitems=20)

PATH='data/rossmann/'
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
## Create datasets

In addition to the provided data, we will be using external datasets put together by participants in the Kaggle competition. You can download all of them [here](http://files.fast.ai/part2/lesson14/rossmann.tgz).

For completeness, the implementation used to put them together is included below.
def concat_csvs(dirname):
    path = f'{PATH}{dirname}'
    filenames = glob(f"{path}/*.csv")  # per-state CSVs inside the given directory

    wrote_header = False
    with open(f"{path}.csv", "w") as outputfile:
        for filename in filenames:
            name = filename.split(".")[0]
            with open(filename) as f:
                line = f.readline()
                if not wrote_header:
                    wrote_header = True
                    outputfile.write("file," + line)
                for line in f:
                    outputfile.write(name + "," + line)
                outputfile.write("\n")

# concat_csvs('googletrend')
# concat_csvs('weather')
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
Feature Space:
* train: Training set provided by competition
* store: List of stores
* store_states: mapping of store to the German state they are in
* state_names: List of German state names
* googletrend: trend of certain google keywords over time, found by users to correlate well w/ given data
* weather: weather
* test: testing set
table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
We'll be using the popular data manipulation framework `pandas`. Among other things, pandas allows you to manipulate tables/data frames in Python as one would in a database.

We're going to go ahead and load all of our CSVs as dataframes into the list `tables`.
tables = [pd.read_csv(f'{PATH}{fname}.csv', low_memory=False) for fname in table_names]

from IPython.display import HTML, display
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
We can use `head()` to get a quick look at the contents of each table:
* train: Contains store information on a daily basis, tracks things like sales, customers, whether that day was a holiday, etc.
* store: general info about the store including competition, etc.
* store_states: maps store to the state it is in
* state_names: Maps state abbreviations to names
* googletrend: trend data for particular week/state
* weather: weather conditions for each state
* test: Same as training table, w/o sales and customers
for t in tables: display(t.head())
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
This is very representative of a typical industry dataset.

The following returns summarized aggregate information for each table across each field.
for t in tables: display(DataFrameSummary(t).summary())
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
## Data Cleaning / Feature Engineering

As a structured data problem, we necessarily have to go through all the cleaning and feature engineering, even though we're using a neural network.
train, store, store_states, state_names, googletrend, weather, test = tables

len(train), len(test)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
We turn state holidays into booleans, to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday!='0'
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
`join_df` is a function for joining tables on specific fields. By default, we'll be doing a left outer join of `right` on the `left` argument using the given fields for each table.

Pandas does joins using the `merge` method. The `suffixes` argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "\_y" to those on the right.
def join_df(left, right, left_on, right_on=None, suffix='_y'):
    if right_on is None: right_on = left_on
    return left.merge(right, how='left', left_on=left_on, right_on=right_on,
                      suffixes=("", suffix))
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
Join weather/state names.
weather = join_df(weather, state_names, "file", "StateName")
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
In pandas you can add new columns to a dataframe by simply defining them. We'll do this for googletrends by extracting dates and state names from the given data and adding those columns.

We're also going to replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'. This is a good opportunity to highlight pandas indexing. We can use `.loc[rows, cols]` to select a list of rows and a list of columns from the dataframe. In this case, we're selecting rows with state name 'NI' by using the boolean array `googletrend.State=='NI'` and selecting the column "State".
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.

You should *always* consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll add to every table with a date field.
add_datepart(weather, "Date", drop=False)
add_datepart(googletrend, "Date", drop=False)
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
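`add_datepart()` is a fastai helper; as a rough, hedged illustration of the kind of expansion it performs, here is a plain-pandas sketch that derives a few similar calendar fields from a `Date` column (the exact set and names of columns fastai adds may differ, and the `toy` frame is made up for illustration).

```python
import pandas as pd

# Toy frame with a date column, just to illustrate date-part expansion.
toy = pd.DataFrame({"Date": pd.to_datetime(["2015-07-31", "2015-08-01", "2015-12-24"])})

# Derive calendar fields similar in spirit to what add_datepart produces.
toy["Year"] = toy.Date.dt.year
toy["Month"] = toy.Date.dt.month
toy["Week"] = toy.Date.dt.isocalendar().week.astype(int)
toy["Day"] = toy.Date.dt.day
toy["Dayofweek"] = toy.Date.dt.dayofweek
toy["Is_month_end"] = toy.Date.dt.is_month_end

print(toy)
```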
The Google trends data has a special category for the whole of Germany - we'll pull that out so we can use it explicitly.
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
Now we can outer join all of our data into a single dataframe. Recall that in outer joins, every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.

*Aside*: Why not just do an inner join? If you are assuming that all records are complete and match on the field you desire, an inner join will do the same thing as an outer join. However, in the event you are wrong or a mistake is made, an outer join followed by a null-check will catch it. (Comparing before/after row counts for an inner join is equivalent, but requires keeping track of those counts. An outer join is easier.)
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])

joined = join_df(train, store, "Store")
joined_test = join_df(test, store, "Store")
len(joined[joined.StoreType.isnull()]), len(joined_test[joined_test.StoreType.isnull()])

joined = join_df(joined, googletrend, ["State","Year", "Week"])
joined_test = join_df(joined_test, googletrend, ["State","Year", "Week"])
len(joined[joined.trend.isnull()]), len(joined_test[joined_test.trend.isnull()])

joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
joined_test = joined_test.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]), len(joined_test[joined_test.trend_DE.isnull()])

joined = join_df(joined, weather, ["State","Date"])
joined_test = join_df(joined_test, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()]), len(joined_test[joined_test.Mean_TemperatureC.isnull()])

for df in (joined, joined_test):
    for c in df.columns:
        if c.endswith('_y'):
            if c in df.columns: df.drop(c, inplace=True, axis=1)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
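To make the null-check idea concrete, here is a small hedged sketch on toy data (not the Rossmann tables) showing how a left join leaves `NaN`s for unmatched keys, which the `isnull()` counts above are designed to catch.

```python
import pandas as pd

left = pd.DataFrame({"Store": [1, 2, 3], "Sales": [100, 200, 300]})
right = pd.DataFrame({"Store": [1, 2], "State": ["HE", "BY"]})  # store 3 has no state

merged = left.merge(right, how="left", on="Store")
print(merged)

# Count rows where the join found no match; non-zero means incomplete records.
print(len(merged[merged.State.isnull()]))  # -> 1
```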
Next we'll fill in missing values to avoid complications with `NA`'s. `NA` (not available) is how Pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we are picking an arbitrary *signal value* that doesn't otherwise appear in the data.
for df in (joined,joined_test):
    df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
    df['CompetitionOpenSinceMonth'] = df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
    df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
    df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
Next we'll extract the features "CompetitionOpenSince" and "CompetitionDaysOpen". (Note the use of `apply()` for mapping a function across dataframe rows when we do the same for the Promo dates below.)
for df in (joined,joined_test): df["CompetitionOpenSince"] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear, month=df.CompetitionOpenSinceMonth, day=15)) df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
We'll replace some erroneous / outlying data.
for df in (joined,joined_test):
    df.loc[df.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
    df.loc[df.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
We add "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit number of unique categories.
for df in (joined,joined_test): df["CompetitionMonthsOpen"] = df["CompetitionDaysOpen"]//30 df.loc[df.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24 joined.CompetitionMonthsOpen.unique()
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
Same process for Promo dates.
for df in (joined,joined_test): df["Promo2Since"] = pd.to_datetime(df.apply(lambda x: Week( x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1).astype(pd.datetime)) df["Promo2Days"] = df.Date.subtract(df["Promo2Since"]).dt.days for df in (joined,joined_test): df.loc[df.Promo2Days<0, "Promo2Days"] = 0 df.loc[df.Promo2SinceYear<1990, "Promo2Days"] = 0 df["Promo2Weeks"] = df["Promo2Days"]//7 df.loc[df.Promo2Weeks<0, "Promo2Weeks"] = 0 df.loc[df.Promo2Weeks>25, "Promo2Weeks"] = 25 df.Promo2Weeks.unique() joined.to_feather(f'{PATH}joined') joined_test.to_feather(f'{PATH}joined_test')
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
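The `Week(...)` helper used above appears to be the `Week` class from the third-party `isoweek` package (an assumption here; it may be re-exported by the fastai imports). `Week(year, week).monday()` returns the date of the Monday that starts the given ISO week, which is what anchors `Promo2Since`. A minimal hedged sketch:

```python
# Assumes the third-party `isoweek` package is installed (pip install isoweek).
from isoweek import Week

# Monday of ISO week 31 of 2015 -> datetime.date(2015, 7, 27)
print(Week(2015, 31).monday())
```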
## Durations

It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.:
* Running averages
* Time until next event
* Time since last event

This is often difficult to do with most table manipulation frameworks, since they are designed to work with relationships across columns. As such, we've created a class to handle this type of data.

We'll define a function `get_elapsed` for cumulative counting across a sorted dataframe. Given a particular field `fld` to monitor, this function will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.

Upon initialization, this will result in datetime na's until the field is encountered. This is reset every time a new store is seen. We'll see how to use this shortly.
def get_elapsed(fld, pre):
    day1 = np.timedelta64(1, 'D')
    last_date = np.datetime64()
    last_store = 0
    res = []

    for s,v,d in zip(df.Store.values, df[fld].values, df.Date.values):
        if s != last_store:
            last_date = np.datetime64()
            last_store = s
        if v: last_date = d
        res.append(((d-last_date).astype('timedelta64[D]') / day1))
    df[pre+fld] = res
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
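To see what `get_elapsed` produces, here is a small hedged illustration on a toy dataframe (made-up store and dates, not the Rossmann data); it is harmless to run at this point because `df` is reassigned to the real data in the next cell.

```python
import numpy as np
import pandas as pd

# Toy data: one store, with a promo on the 2nd and 4th days.
df = pd.DataFrame({
    "Store": [1, 1, 1, 1],
    "Date": pd.to_datetime(["2015-01-01", "2015-01-02", "2015-01-03", "2015-01-04"]),
    "Promo": [0, 1, 0, 1],
})

get_elapsed("Promo", "After")
# AfterPromo is NaN until the first promo is seen, then counts days since the last promo.
print(df[["Date", "Promo", "AfterPromo"]])
```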
We'll be applying this to a subset of columns:
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"] df = train[columns] df = test[columns]
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
Let's walk through an example.

Say we're looking at School Holiday. We'll first sort by Store, then Date, and then call `get_elapsed('SchoolHoliday', 'After')`. This will:
* Be applied to every row of the dataframe, in order of store and date
* Add to the dataframe the days since seeing a School Holiday
* If we sort in the other direction, count the days until another holiday.
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
We'll do this for two more fields.
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')

fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
We're going to set the active index to Date.
df = df.set_index("Date")
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
Then set null values from elapsed field calculations to 0.
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']

for o in ['Before', 'After']:
    for p in columns:
        a = o+p
        df[a] = df[a].fillna(0).astype(int)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
Next we'll demonstrate window functions in pandas to calculate rolling quantities.Here we're sorting by date (`sort_index()`) and counting the number of events of interest (`sum()`) defined in `columns` in the following week (`rolling()`), grouped by Store (`groupby()`). We do the same in the opposite direction.
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False
                                      ).groupby("Store").rolling(7, min_periods=1).sum()
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
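As a hedged toy illustration of the pattern above (made-up data, not the Rossmann frame): group by store, then take a rolling sum of an indicator column over a 7-day window.

```python
import pandas as pd

toy = pd.DataFrame({
    "Store": [1]*5,
    "Promo": [1, 0, 1, 1, 0],
}, index=pd.date_range("2015-01-01", periods=5, name="Date"))

# Rolling count of promo days in the trailing 7-row window, per store.
rolled = toy.groupby("Store").Promo.rolling(7, min_periods=1).sum()
print(rolled)  # 1, 1, 2, 3, 3
```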
Next we want to drop the Store indices grouped together in the window function.

Often in pandas, there is an option to do this in place. This is time and memory efficient when working with large datasets.
bwd.drop('Store',1,inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store',1,inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
Now we'll merge these values onto the df.
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns,1,inplace=True)
df.head()
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
It's usually a good idea to back up large tables of extracted/wrangled features before you join them onto another one; that way you can easily go back to them if you need to make changes.
df.to_feather(f'{PATH}df')
df = pd.read_feather(f'{PATH}df')
df["Date"] = pd.to_datetime(df.Date)
df.columns

joined = join_df(joined, df, ['Store', 'Date'])
joined_test = join_df(joined_test, df, ['Store', 'Date'])
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
The authors also removed all instances where the store had zero sales / was closed. We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little exploratory data analysis reveals that there are often periods where stores are closed, typically for refurbishment. Before and after these periods, there are naturally spikes in sales that one might expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
joined = joined[joined.Sales!=0]
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
We'll back this up as well.
joined.reset_index(inplace=True)
joined_test.reset_index(inplace=True)

joined.to_feather(f'{PATH}joined')
joined_test.to_feather(f'{PATH}joined_test')
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
We now have our final set of engineered features.

While these steps were explicitly outlined in the paper, these are all fairly typical feature engineering steps for dealing with time series data and are practical in any similar setting.

## Create features
joined = pd.read_feather(f'{PATH}joined')
joined_test = pd.read_feather(f'{PATH}joined_test')

joined.head().T.head(40)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
Now that we've engineered all our features, we need to convert them to input compatible with a neural network.

This includes converting categorical variables into contiguous integers or one-hot encodings, normalizing continuous features to standard normal, etc.
cat_vars = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday', 'CompetitionMonthsOpen',
    'Promo2Weeks', 'StoreType', 'Assortment', 'PromoInterval', 'CompetitionOpenSinceYear', 'Promo2SinceYear',
    'State', 'Week', 'Events', 'Promo_fw', 'Promo_bw', 'StateHoliday_fw', 'StateHoliday_bw',
    'SchoolHoliday_fw', 'SchoolHoliday_bw']

contin_vars = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC',
    'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
    'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend', 'trend_DE',
    'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday']

n = len(joined); n

dep = 'Sales'
joined = joined[cat_vars+contin_vars+[dep, 'Date']].copy()

joined_test[dep] = 0
joined_test = joined_test[cat_vars+contin_vars+[dep, 'Date', 'Id']].copy()

for v in cat_vars: joined[v] = joined[v].astype('category').cat.as_ordered()

apply_cats(joined_test, joined)

for v in contin_vars:
    joined[v] = joined[v].fillna(0).astype('float32')
    joined_test[v] = joined_test[v].fillna(0).astype('float32')
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
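As a hedged plain-pandas/NumPy sketch of what this preparation amounts to (the fastai helpers `apply_cats` and, later, `proc_df` handle it for you; the `toy` frame and its column names are made up):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"StoreType": ["a", "b", "a", "c"],                      # categorical
                    "CompetitionDistance": [100.0, 250.0, np.nan, 400.0]})  # continuous

# Categorical -> contiguous integer codes (0..n_categories-1), suitable for embedding lookup.
toy["StoreType_code"] = toy["StoreType"].astype("category").cat.codes

# Continuous -> fill missing values, then standardize to roughly zero mean / unit variance.
col = toy["CompetitionDistance"].fillna(0)
toy["CompetitionDistance_norm"] = (col - col.mean()) / col.std()

print(toy)
```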
We're going to run on a sample.
idxs = get_cv_idxs(n, val_pct=150000/n)
joined_samp = joined.iloc[idxs].set_index("Date")
samp_size = len(joined_samp); samp_size
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
To run on the full dataset, use this instead:
samp_size = n
joined_samp = joined.set_index("Date")
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
We can now process our data...
joined_samp.head(2)

df, y, nas, mapper = proc_df(joined_samp, 'Sales', do_scale=True)
yl = np.log(y)

joined_test = joined_test.set_index("Date")

df_test, _, nas, mapper = proc_df(joined_test, 'Sales', do_scale=True, skip_flds=['Id'],
                                  mapper=mapper, na_dict=nas)

df.head(2)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
In time series data, cross-validation is not random. Instead, our holdout data is generally the most recent data, as it would be in real application. This issue is discussed in detail in [this post](http://www.fast.ai/2017/11/13/validation-sets/) on our web site.One approach is to take the last 25% of rows (sorted by date) as our validation set.
train_ratio = 0.75
# train_ratio = 0.9
train_size = int(samp_size * train_ratio); train_size

val_idx = list(range(train_size, len(df)))
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
An even better option for picking a validation set is using the exact same length of time period as the test set uses - this is implemented here:
val_idx = np.flatnonzero(
    (df.index<=datetime.datetime(2014,9,17)) & (df.index>=datetime.datetime(2014,8,1)))

val_idx=[0]
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
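If you would rather derive that window from the data than hard-code dates, here is a hedged sketch (it assumes the raw `test` table with its `Date` column is still in scope, and `val_idx_alt` is just an illustrative name):

```python
import numpy as np
import pandas as pd

# Length of the Kaggle test period, in days.
test_dates = pd.to_datetime(test["Date"])
period = test_dates.max() - test_dates.min()

# Use a validation slice of the same length, taken from the end of the training dates.
train_end = df.index.max()
val_mask = (df.index >= train_end - period) & (df.index <= train_end)
val_idx_alt = np.flatnonzero(val_mask)
```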
## DL

We're ready to put together our models.

Root-mean-squared percent error is the metric Kaggle used for this competition.
def inv_y(a): return np.exp(a)

def exp_rmspe(y_pred, targ):
    targ = inv_y(targ)
    pct_var = (targ - inv_y(y_pred))/targ
    return math.sqrt((pct_var**2).mean())

max_log_y = np.max(yl)
y_range = (0, max_log_y*1.2)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
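Written out, the metric implemented above is (with $y_i$ the true sales and $\hat{y}_i$ the predictions, both on the original scale after undoing the log transform):

$$\mathrm{RMSPE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_i - \hat{y}_i}{y_i}\right)^2}$$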
We can create a ModelData object directly from our data frame.
md = ColumnarModelData.from_data_frame(PATH, val_idx, df, yl.astype(np.float32), cat_flds=cat_vars, bs=128, test_df=df_test)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
Some categorical variables have a lot more levels than others. Store, in particular, has over a thousand!
cat_sz = [(c, len(joined_samp[c].cat.categories)+1) for c in cat_vars]
cat_sz
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
We use the *cardinality* of each variable (that is, its number of unique values) to decide how large to make its *embeddings*. Each level will be associated with a vector with length defined as below.
emb_szs = [(c, min(50, (c+1)//2)) for _,c in cat_sz]
emb_szs

m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
                   0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
lr = 1e-3

m.lr_find()
m.sched.plot(100)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
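To make the rule concrete, a quick hedged check of what `min(50, (c+1)//2)` gives for a few cardinalities (the first three values are purely illustrative; 1116 is roughly the Store-like cardinality mentioned above):

```python
# Embedding width = half the cardinality (rounded up), capped at 50.
for c in (4, 8, 53, 1116):
    print(c, "->", min(50, (c + 1) // 2))
# 4 -> 2, 8 -> 4, 53 -> 27, 1116 -> 50
```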
### Sample
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
                   0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
lr = 1e-3

m.fit(lr, 3, metrics=[exp_rmspe])
m.fit(lr, 5, metrics=[exp_rmspe], cycle_len=1)
m.fit(lr, 2, metrics=[exp_rmspe], cycle_len=4)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
### All
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
                   0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
lr = 1e-3

m.fit(lr, 1, metrics=[exp_rmspe])
m.fit(lr, 3, metrics=[exp_rmspe])
m.fit(lr, 3, metrics=[exp_rmspe], cycle_len=1)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
### Test
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
                   0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
lr = 1e-3

m.fit(lr, 3, metrics=[exp_rmspe])
m.fit(lr, 3, metrics=[exp_rmspe], cycle_len=1)

m.save('val0')
m.load('val0')

x,y = m.predict_with_targs()
exp_rmspe(x,y)

pred_test = m.predict(True)
pred_test = np.exp(pred_test)

joined_test['Sales'] = pred_test

csv_fn = f'{PATH}tmp/sub.csv'
joined_test[['Id','Sales']].to_csv(csv_fn, index=False)

FileLink(csv_fn)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
## RF
from sklearn.ensemble import RandomForestRegressor

((val,trn), (y_val,y_trn)) = split_by_idx(val_idx, df.values, yl)

m = RandomForestRegressor(n_estimators=40, max_features=0.99, min_samples_leaf=2,
                          n_jobs=-1, oob_score=True)
m.fit(trn, y_trn);

preds = m.predict(val)
m.score(trn, y_trn), m.score(val, y_val), m.oob_score_, exp_rmspe(preds, y_val)
_____no_output_____
Apache-2.0
courses/dl1/lesson3-rossman.ipynb
linbojin/fastai
# Dataset Marlon

Soybean, CBOT Soybean Futures + (Global Historical Climatology Network (GHCN) filtered by USDA-NASS-soybeans-production_bushels-2015)

## Soybean, CBOT Soybean Futures
- https://blog.quandl.com/api-for-commodity-data
- http://www.quandl.com/api/v3/datasets/CHRIS/CME_S1/

## Global Historical Climatology Network (GHCN)
- https://www.ncdc.noaa.gov/data-access/land-based-station-data/land-based-datasets/global-historical-climatology-network-ghcn
- FTP: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/

## USDA-NASS-soybeans-production_bushels-2015
- https://usda-reports.nautilytics.com/?crop=soybeans&statistic=production_dollars&year=2007
- https://www.nass.usda.gov/Data_Visualization/index.php
- https://github.com/aaronpenne/get_noaa_ghcn_data.git

## Imports
%matplotlib inline

import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
import pandas as pd
import numpy as np
import os
from six.moves import urllib
from ftplib import FTP
from io import StringIO
from IPython.display import clear_output
from functools import reduce
import tarfile
import subprocess  #subprocess.run(["ls", "-l"])
import zipfile
import shutil  # move files
import psutil

# Load the Drive helper and mount
from google.colab import drive
drive.mount('/content/drive')
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
## Defines
ROOT_PATH = "drive/My Drive/TCC/" DATASETS_PATH = ROOT_PATH + "datasets/" SOYBEAN_PATH = DATASETS_PATH + "CBOTSoybeanFutures/" WEATHER_PATH = DATASETS_PATH + "GHCN_Data/" SOYBEAN_URL = "http://www.quandl.com/api/v3/datasets/CHRIS/CME_S1/data.csv" USDA_PATH = "datasets/USDA-NASS-soybeans-production_bushels-2015/" WEATHER_PATH_DRIVE_ZIP = WEATHER_PATH + "data/zip/" WEATHER_PATH_DRIVE_CSV = WEATHER_PATH + "data/csv/" FIXED_STATE_FILE = WEATHER_PATH + "fixed_states.txt" CALCULATED_STATE_FILE = WEATHER_PATH + "calculated_states.txt" DOWNLOADED_STATIONS_FILE = WEATHER_PATH + "downloaded_stations.txt" DOWNLOADED_STATIONS_FILE_TEMP = DOWNLOADED_STATIONS_FILE plt.rcParams["figure.figsize"] = [19,15] plt.rcParams.update({'font.size': 27}) # Create directories # and initial files if not os.path.exists(SOYBEAN_PATH): os.makedirs(SOYBEAN_PATH) if not os.path.exists(WEATHER_PATH_DRIVE_ZIP): os.makedirs(WEATHER_PATH_DRIVE_ZIP) if not os.path.exists(WEATHER_PATH_DRIVE_CSV): os.makedirs(WEATHER_PATH_DRIVE_CSV) if not os.path.exists(DOWNLOADED_STATIONS_FILE): open(DOWNLOADED_STATIONS_FILE,'a').close() if not os.path.exists(DOWNLOADED_STATIONS_FILE_TEMP): open(DOWNLOADED_STATIONS_FILE_TEMP,'a').close() if not os.path.exists(FIXED_STATE_FILE): open(FIXED_STATE_FILE,'a').close() if not os.path.exists(CALCULATED_STATE_FILE): open(CALCULATED_STATE_FILE,'a').close()
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
https://github.com/aaronpenne/get_noaa_ghcn_data.git

https://github.com/aaronpenne/get_noaa_ghcn_data/blob/master/get_station_id.py
# -*- coding: utf-8 -*-
"""
Searches list of stations via user input to find the station ID.

Author: Aaron Penne

------------------------------
Variable      Columns   Type
------------------------------
ID               1-11   Character
LATITUDE        13-20   Real
LONGITUDE       22-30   Real
ELEVATION       32-37   Real
STATE           39-40   Character
NAME            42-71   Character
GSN FLAG        73-75   Character
HCN/CRN FLAG    77-79   Character
WMO ID          81-85   Character
------------------------------
"""

import sys
import traceback  # used in the except block below
import pandas as pd
from ftplib import FTP
import os

output_dir = os.path.relpath('output')
if not os.path.isdir(output_dir):
    os.mkdir(output_dir)

ftp_path_dly = '/pub/data/ghcn/daily/'
ftp_path_dly_all = '/pub/data/ghcn/daily/all/'
ftp_filename = 'ghcnd-stations.txt'

def connect_to_ftp():
    ftp_path_root = 'ftp.ncdc.noaa.gov'
    # Access NOAA FTP server
    ftp = FTP(ftp_path_root)
    message = ftp.login()  # No credentials needed
    print(message)
    return ftp

def get_station_id(ftp, search_term):
    '''
    Get stations file
    '''
    ftp_full_path = os.path.join(ftp_path_dly, ftp_filename)
    local_full_path = os.path.join(output_dir, ftp_filename)
    if not os.path.isfile(local_full_path):
        with open(local_full_path, 'wb+') as f:
            ftp.retrbinary('RETR ' + ftp_full_path, f.write)

    '''
    Get user search term
    '''
    query = search_term
    query = query.upper()
    print("> Query: '"+query+"'")

    '''
    Read stations text file using fixed-width-file reader built into pandas
    '''
    # http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_fwf.html
    dtype = {'STATION_ID': str, 'LATITUDE': str, 'LONGITUDE': str, 'ELEVATION': str,
             'STATE': str, 'STATION_NAME': str, 'GSN_FLAG': str, 'HCN_CRN_FLAG': str,
             'WMO_ID': str}
    names = ['STATION_ID', 'LATITUDE', 'LONGITUDE', 'ELEVATION', 'STATE',
             'STATION_NAME', 'GSN_FLAG', 'HCN_CRN_FLAG', 'WMO_ID']
    widths = [11,  # Station ID
              9,   # Latitude (decimal degrees)
              10,  # Longitude (decimal degrees)
              7,   # Elevation (meters)
              3,   # State (USA stations only)
              31,  # Station Name
              4,   # GSN Flag
              4,   # HCN/CRN Flag
              6]   # WMO ID
    df = pd.read_fwf(local_full_path, widths=widths, names=names, dtype=dtype, header=None)

    '''
    Replace missing values (nan, -999.9)
    '''
    df['STATE'] = df['STATE'].replace('nan', '--')
    df['GSN_FLAG'] = df['GSN_FLAG'].replace('nan', '---')
    df['HCN_CRN_FLAG'] = df['GSN_FLAG'].replace('nan', '---')
    df = df.replace(-999.9, float('nan'))

    try:
        '''
        Get query results, but only the columns we care about
        '''
        print('Searching records...')
        matches = df['STATION_ID'].str.contains(query)
        df = df.loc[matches, ['STATION_ID', 'LATITUDE', 'LONGITUDE', 'ELEVATION', 'STATE', 'STATION_NAME']]
        df.reset_index(drop=True, inplace=True)

        '''
        Get file sizes of each station's records to augment results
        '''
        #print('Getting file sizes...', end='')
        #print(df.index)
        #ftp.voidcmd('TYPE I')  # Needed to avoid FTP error with ftp.size()
        #count=0
        #last = ''
        #for i in list(df.index):
        #    count = count + 1
        #    print('.', end='')
        #    ftp_dly_file = ftp_path_dly + 'all/' + df.loc[i, 'STATION_ID'] + '.dly'
        #    #print(df.loc[i, 'STATION_ID'], end='')
        #    df.loc[i, 'SIZE'] = round(ftp.size(ftp_dly_file)/1000)  # Kilobytes
        #    #print('size: %d KB' %round(ftp.size(ftp_dly_file)/1000))
        #    actual = " %.1f%% " % ((count/df.index.size)*100)
        #    if (actual != last):
        #        clear_output()
        #        last = actual
        #        #print("%.2f%% " %((count/df.index.size)*100), end='')
        #        print('Getting file sizes...')
        #        print(str(actual) + ' ['+ str(count) + ' of ' + str(df.index.size) + ']')
        print()
        print()

        '''
        Sort by size then by rounded lat/long values to group geographic areas and show stations with most data
        '''
        df_sort = df.round(0)
        #df_sort.sort_values(['LATITUDE', 'LONGITUDE', 'SIZE'], ascending=False, inplace=True)
        df_sort.sort_values(['LATITUDE', 'LONGITUDE'], ascending=False, inplace=True)
        df = df.loc[df_sort.index]
        df.reset_index(drop=True, inplace=True)
    except:
        print('Station not found')
        traceback.print_exc(file=sys.stdout)
        ftp.quit()
        sys.exit()

    '''
    Print headers and values to facilitate reading
    '''
    #selection = 'Index'
    #station_id = 'Station_ID '
    #lat = 'Latitude'
    #lon = 'Longitude'
    #state = 'State'
    #name = 'Station_Name '
    #size = ' File_Size'
    # Format output to be pretty, hopefully there is a prettier way to do this.
    #print('{: <6}{: <31}{: <6}({: >8},{: >10}){: >13}'.format(selection, name, state, lat, lon, size))
    #print('-'*5 + ' ' + '-'*30 + ' ' + '-'*5 + ' ' + '-'*21 + ' ' + '-'*12)
    #for i in list(df.index):
    #    print('{: 4}: {: <31}{: <6}({: >8},{: >10}){: >10} Kb'.format(i,
    #                                                                  df.loc[i,'STATION_NAME'],
    #                                                                  df.loc[i,'STATE'],
    #                                                                  df.loc[i,'LATITUDE'],
    #                                                                  df.loc[i,'LONGITUDE'],
    #                                                                  df.loc[i,'SIZE']))

    # '''
    # Get user selection
    # '''
    # try:
    #     query = input('Enter selection (ex. 001, 42): ')
    #     query = int(query)
    # except:
    #     print('Please enter valid selection (ex. 001, 42)')
    #     ftp.quit()
    #     sys.exit()

    #station_id = df.loc[query, 'STATION_ID']
    station_id = df
    return station_id

def get_station(ftp=None, search_term='US'):
    close_after = False
    if ftp==None:
        ftp = connect_to_ftp()
        close_after = True
    station_id = get_station_id(ftp, search_term)
    #print(station_id)
    if close_after:
        ftp.quit()
    return (station_id)
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
https://github.com/aaronpenne/get_noaa_ghcn_data/blob/master/get_dly.py
""" Grabs .dly file from the NOAA GHCN FTP server, parses, and reshapes to have one day per row and element values in the columns. Writes output as CSV. Author: Aaron Penne .dly Format In (roughly): .csv Format Out (roughly): ------------------------- -------------------------- Month1 PRCP Day1 Day2 ... Day31 Day1 PRCP SNOW Month1 SNOW Day1 Day2 ... Day31 Day2 PRCP SNOW Month2 PRCP Day1 Day2 ... Day31 Day3 PRCP SNOW Month2 SNOW Day1 Day2 ... Day31 Day4 PRCP SNOW Starting with 5 core elements (per README) PRCP = Precipitation (tenths of mm) SNOW = Snowfall (mm) SNWD = Snow depth (mm) TMAX = Maximum temperature (tenths of degrees C) TMIN = Minimum temperature (tenths of degrees C) ICD: ------------------------------ Variable Columns Type ------------------------------ ID 1-11 Character YEAR 12-15 Integer MONTH 16-17 Integer ELEMENT 18-21 Character VALUE1 22-26 Integer MFLAG1 27-27 Character QFLAG1 28-28 Character SFLAG1 29-29 Character VALUE2 30-34 Integer MFLAG2 35-35 Character QFLAG2 36-36 Character SFLAG2 37-37 Character . . . . . . . . . VALUE31 262-266 Integer MFLAG31 267-267 Character QFLAG31 268-268 Character SFLAG31 269-269 Character ------------------------------ """ import pandas as pd from ftplib import FTP from io import StringIO import os ftp_path_dly_all = '/pub/data/ghcn/daily/all/' def connect_to_ftp(): """ Get FTP server and file details """ ftp_path_root = 'ftp.ncdc.noaa.gov' # Access NOAA FTP server ftp = FTP(ftp_path_root) message = ftp.login() # No credentials needed #print(message) return ftp # Marlon Franco def disconnect_to_ftp(ftp_connection): return ftp_connection.quit() def get_flags(s): """ Get flags, replacing empty flags with '_' for clarity (' S ' becomes '_S_') """ m_flag = s.read(1) m_flag = m_flag if m_flag.strip() else '_' q_flag = s.read(1) q_flag = q_flag if q_flag.strip() else '_' s_flag = s.read(1) s_flag = s_flag if s_flag.strip() else '_' return [m_flag + q_flag + s_flag] def create_dataframe(element, dict_element): """ Make dataframes out of the dicts, make the indices date strings (YYYY-MM-DD) """ element = element.upper() df_element = pd.DataFrame(dict_element) # Add dates (YYYY-MM-DD) as index on df. Pad days with zeros to two places df_element.index = df_element['YEAR'] + '-' + df_element['MONTH'] + '-' + df_element['DAY'].str.zfill(2) df_element.index.name = 'DATE' # Arrange columns so ID, YEAR, MONTH, DAY are at front. Leaving them in for plotting later - https://stackoverflow.com/a/31396042 for col in ['DAY', 'MONTH', 'YEAR', 'ID']: df_element = move_col_to_front(col, df_element) # Convert numerical values to float df_element.loc[:,element] = df_element.loc[:,element].astype(float) return df_element def move_col_to_front(element, df): element = element.upper() cols = df.columns.tolist() cols.insert(0, cols.pop(cols.index(element))) df = df.reindex(columns=cols) return df def dly_to_csv(ftp, station_id, output_dir, save_dly): #output_dir = os.path.relpath('output') if not os.path.isdir(output_dir): os.makedirs(output_dir) ftp_filename = station_id + '.dly' # Write .dly file to stream using StringIO using FTP command 'RETR' s = StringIO() ftp.retrlines('RETR ' + ftp_path_dly_all + ftp_filename, s.write) s.seek(0) # Write .dly file to dir to preserve original # FIXME make optional? 
if (save_dly): with open(os.path.join(output_dir, ftp_filename), 'wb+') as f: ftp.retrbinary('RETR ' + ftp_path_dly_all + ftp_filename, f.write) # Move to first char in file s.seek(0) # File params num_chars_line = 269 num_chars_metadata = 21 element_list = ['PRCP', 'SNOW', 'SNWD', 'TMAX', 'TMIN'] ''' Read through entire StringIO stream (the .dly file) and collect the data ''' all_dicts = {} element_flag = {} prev_year = '0000' i = 0 while True: i += 1 ''' Read metadata for each line (one month of data for a particular element per line) ''' id_station = s.read(11) year = s.read(4) month = s.read(2) day = 0 element = s.read(4) # If this is blank then we've reached EOF and should exit loop if not element: break ''' Print status ''' if year != prev_year: #print('Year {} | Line {}'.format(year, i)) prev_year = year ''' Loop through each day in rest of row, break if current position is end of row ''' while s.tell() % num_chars_line != 0: day += 1 # Fill in contents of each dict depending on element type in current row if day == 1: try: first_hit = element_flag[element] except: element_flag[element] = 1 all_dicts[element] = {} all_dicts[element]['ID'] = [] all_dicts[element]['YEAR'] = [] all_dicts[element]['MONTH'] = [] all_dicts[element]['DAY'] = [] all_dicts[element][element.upper()] = [] all_dicts[element][element.upper() + '_FLAGS'] = [] value = s.read(5) flags = get_flags(s) if value == '-9999': continue all_dicts[element]['ID'] += [station_id] all_dicts[element]['YEAR'] += [year] all_dicts[element]['MONTH'] += [month] all_dicts[element]['DAY'] += [str(day)] all_dicts[element][element.upper()] += [value] all_dicts[element][element.upper() + '_FLAGS'] += flags ''' Create dataframes from dict ''' all_dfs = {} for element in list(all_dicts.keys()): all_dfs[element] = create_dataframe(element, all_dicts[element]) ''' Combine all element dataframes into one dataframe, indexed on date. ''' # pd.concat automagically aligns values to matching indices, therefore the data is date aligned! list_dfs = [] for df in list(all_dfs.keys()): list_dfs += [all_dfs[df]] df_all = pd.concat(list_dfs, axis=1, sort=False) df_all.index.name = 'MM/DD/YYYY' ''' Remove duplicated/broken columns and rows ''' # https://stackoverflow.com/a/40435354 df_all = df_all.loc[:,~df_all.columns.duplicated()] df_all = df_all.loc[df_all['ID'].notnull(), :] ''' Output to CSV, convert everything to strings first ''' # NOTE: To open the CSV in Excel, go through the CSV import wizard, otherwise it will come out broken df_out = df_all.astype(str) df_out.to_csv(os.path.join(output_dir, station_id + '.csv')) #print('\nOutput CSV saved to: {}'.format(os.path.join(output_dir, station_id + '.csv'))) def get_weather_data(ftp=None, station_id='USR0000CCHC',output_dir=WEATHER_PATH, save_dly=False): close_after = False if ftp==None: ftp = connect_to_ftp() close_after = True dly_to_csv(ftp, station_id,output_dir, save_dly) if close_after: ftp.quit()
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
## Fetch Data
def fetch_soybean_data(soybean_url=SOYBEAN_URL, soybean_path=SOYBEAN_PATH):
    if not os.path.isdir(soybean_path):
        os.makedirs(soybean_path)
    csv_path = os.path.join(soybean_path, "soybeans.csv")
    urllib.request.urlretrieve(soybean_url, csv_path)

def fetch_weather_data(contains='US', weather_path=WEATHER_PATH_DRIVE_CSV, save_dly=False, how_much=100):
    conn = connect_to_ftp()
    weather = get_station(conn, search_term=contains)  # List all stations from USA
    downloaded_stations = ""
    with open(DOWNLOADED_STATIONS_FILE_TEMP,"r+") as f:
        downloaded_stations = f.read()
    count = 0
    count2 = 0
    total = weather['STATION_ID'].size
    amount_of_data = total * how_much/100
    last = ''
    for station in weather['STATION_ID']:
        print('.', end='')
        count += 1
        actual = "%.2f%% " %((count/total)*100)
        actual_partial = "%.2f%% " %((count2/amount_of_data)*100)
        if (station+'.csv' not in downloaded_stations):
            if (count2 > amount_of_data):
                print('download completed: ['+str(count2)+' of '+str(amount_of_data)+'], total = '+str(total))
                return True
            count2 += 1
            print('get ', end='')
            get_weather_data(conn, station, weather_path, save_dly)
            print('done')
            downloaded_stations += station+'.csv\r\n'
            with open(DOWNLOADED_STATIONS_FILE_TEMP,"a+") as f:
                f.write(station+'.csv\r\n')
        else:
            print(',', end='')
        if (actual != last):
            clear_output()
            last = actual
            print('Getting '+str(how_much)+'% of weather data from GHCN ftp containing \''+contains+'\' in STATION_ID...')
            print('PARTIAL: '+str(actual_partial) + ' ['+ str(count2) + ' of ' + str(amount_of_data) + ']')
            print('TOTAL: '+str(actual) + ' ['+ str(count) + ' of ' + str(total) + ']')
    disconnect_to_ftp(conn)
    print('Final: download completed: ['+str(count2)+' of '+str(amount_of_data)+'], total = '+str(total))
    return True

# Update the local temp control file
#!echo "$DOWNLOADED_STATIONS_FILE" > "$DOWNLOADED_STATIONS_FILE_TEMP" .

#fetch_weather_data(how_much=0.54)  # 0.54% of total amount of data
fetch_weather_data()

# Update the control file
!echo "$DOWNLOADED_STATIONS_FILE_TEMP" > "$DOWNLOADED_STATIONS_FILE" .
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
## Check the number of downloaded station CSV files
weather = get_station(search_term='US')  # List all stations from USA

print("# of stations in GHCN FTP: ", end="")
print(str(weather['STATION_ID'].size))

print("# of downloaded csv files: ", end="")
!find "$WEATHER_PATH_DRIVE_CSV" -type f | wc -l

print("# of downloaded stations in control file: ", end="")
with open(DOWNLOADED_STATIONS_FILE) as f:
    num_lines = sum(1 for _ in f.readlines())
print(str(num_lines))

def force_update_control_file():
    directory = os.path.join(WEATHER_PATH_DRIVE_CSV)
    with open(DOWNLOADED_STATIONS_FILE,"w") as f:
        for root,dirs,files in os.walk(directory):
            for file in files:
                print('.', end='')
                if file.endswith(".csv"):
                    f.write(file+'\r\n')

force_update_control_file()
................ (several thousand progress dots printed by force_update_control_file, truncated)
...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Get 'US' Stations
newfile = '' with open(PROJECT_PATH+'ghcnd-stations-us.txt', 'r') as f: for line in f.readlines(): line_list = line.split(' ') station = line_list[0] newfile += station for word in line_list: if (len(word) > 1): if (word[0].isalpha() and word!=station): state = word newfile += ','+state+'\n' break print(newfile) with open(PROJECT_PATH+'ghcnd-stations-us.csv', 'w+') as f: f.write(newfile)
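As a side note, the same ID/state extraction could be sketched with pandas' fixed-width reader instead of manual string splitting. This is only an illustration: the bare file name and the character positions (station ID in columns 1-11, state code in columns 39-40) are assumptions based on the published GHCN-Daily station-file layout, not anything defined by this notebook.

```python
# Hypothetical fixed-width parse of the GHCN station list with pandas.
import pandas as pd

stations = pd.read_fwf(
    "ghcnd-stations-us.txt",            # assumed path
    colspecs=[(0, 11), (38, 40)],       # half-open character ranges: ID, STATE (assumed layout)
    names=["ID", "STATE"],
    header=None,
)
stations.to_csv("ghcnd-stations-us.csv", index=False, header=False)
```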
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Organize Stations by State
def organize_stations_by_state():
    f1 = ''  # stations_not_dowloaded
    csv_path = WEATHER_PATH_DRIVE_CSV
    with open(WEATHER_PATH+'ghcnd-stations-us.csv', 'r') as f:
        for line in f:
            station = line.split(',')[0]
            state = line.split(',')[1].rstrip()
            # Create target Directory if don't exist
            if not os.path.exists(csv_path+state):
                os.mkdir(csv_path+state)
                print("Directory " , csv_path+state , " Created ")
            #else:
            #    print("Directory " , "csv/"+state , " already exists")
            if not os.path.exists(csv_path+station+".csv"):
                print(".", end='')
                f1 += station+"\n"
            else:
                os.rename(csv_path+station+".csv", csv_path+state+"/"+station+".csv")
    with open(WEATHER_PATH+'stations_not_dowloaded.csv', 'w+') as f:
        f.write(f1)

!ls drive/My\ Drive/TCC/

sLength = len(df1['TMAX'])
df1['e'] = pd.Series(np.random.randn(sLength), index=df1.index)
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Fix columns
def fix_columns(df): for column in df: if column in ('ID','TMAX','TMIN','TAVG','PRCP'): pass else: #print(' deleting ',column, end='') del(df[column]) if 'TMAX' not in df: #print(' creating TMAX... ', end='') #sLength = sizes['TMAX'] df['TMAX'] = pd.Series(0, index=df.index) if 'TMIN' not in df: #print(' creating TMIN... ', end='') #sLength = sizes['TMIN'] df['TMIN'] = pd.Series(0, index=df.index) if 'TAVG' not in df: #print(' creating TAVG... ', end='') #sLength = sizes['TAVG'] df['TAVG'] = pd.Series(0, index=df.index) if 'PRCP' not in df: #print(' creating PRCP... ') #sLength = sizes['PRCP'] df['PRCP'] = pd.Series(0, index=df.index) df=df.fillna(method='ffill') df_ref = load_single_csv(CSV_PATH+'WA/USS0017B04S.csv') sizes = {'TMAX':len(df_ref['TMAX']),'TMIN':len(df_ref['TMIN']),'TAVG':len(df_ref['TAVG']),'PRCP':len(df_ref['PRCP'])} def fix_dataframes(folder=''): root_path = CSV_PATH+folder+'/' print(root_path) count=0 count2=10 total=0 for root, dirs, files in os.walk(root_path): total=len(files) for file in files: if '.csv' in file: station=file.strip('.csv') #print(station) path = os.path.join(root, file) df = load_single_csv(path) fix_columns(df) new_path = os.path.join(PROJECT_PATH+'new/'+folder+'/', file) # Create target Directory if don't exist if not os.path.exists(PROJECT_PATH+'new/'+folder+'/'): os.mkdir(PROJECT_PATH+'new/'+folder+'/') print("Directory " , PROJECT_PATH+'new/'+folder+'/' , " Created ") df.to_csv(new_path) if count2 == 70: count2=0 actual = "%.2f%% " %((count/total)*100) clear_output() print('Fixing ',folder,' stations... ',actual,' (',str(count),' of ',str(total),')') count+=1 count2+=1 print('Done: %.2f%% ' %((count/total)*100)) return True fixed_states = "" with open(FIXED_STATE_FILE, "r+") as f: fixed_states = f.read() print('Already fixed:',fixed_states) for root, dirs, files in os.walk(CSV_PATH): total=len(dirs) for state in dirs: if (state not in fixed_states): if(fix_dataframes(state)): fixed_states+= state with open(FIXED_STATE_FILE,"a") as f: f.write(state+'\r\n') fix_dataframes('CA') df1 = load_single_csv('drive/My Drive/TCC/datasets/GHCN_Data/data/csv/TX/US1TXAC0002.csv') df2 = load_single_csv('drive/My Drive/TCC/datasets/GHCN_Data/data/new/TX/US1TXAC0002.csv') df1.tail() df2.tail()
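The column normalisation done by fix_columns() (drop unwanted columns, create the missing ones filled with 0, then forward-fill) can also be expressed with DataFrame.reindex. The sketch below is not a drop-in replacement for the notebook's function; the demo frame and its column names are made up for illustration only.

```python
# Rough equivalent of fix_columns() using reindex + ffill (sketch only).
import pandas as pd

WANTED = ["ID", "TMAX", "TMIN", "TAVG", "PRCP"]

def fix_columns_sketch(df: pd.DataFrame) -> pd.DataFrame:
    out = df.reindex(columns=WANTED, fill_value=0)  # drops extras, adds missing columns as 0
    return out.ffill()                              # same forward fill as the original

# Tiny usage example with made-up data:
demo = pd.DataFrame({"ID": ["X1", "X1"], "TMAX": [210, None], "SNOW": [0, 5]})
print(fix_columns_sketch(demo))
```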
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Load Data
def load_soybean_data(soybean_path=SOYBEAN_PATH): csv_path = os.path.join(soybean_path, "soybeans.csv") print(csv_path) return pd.read_csv(csv_path) def load_single_csv(csv_path): #print(csv_path) df = pd.read_csv(csv_path,low_memory=False) df.set_index(['MM/DD/YYYY','YEAR','MONTH','DAY'], inplace=True) return df def extract_zip(dir_name=WEATHER_PATH_DRIVE_ZIP,destination_dir=WEATHER_PATH_DRIVE_CSV): for item in os.listdir(dir_name): # loop through items in dir if item.endswith(".zip"): # check for ".zip" extension print("Extracting "+str(item), end="") #file_name = os.path.abspath(item) # get full path of files file_name = dir_name+item # get full path of files zip_ref = zipfile.ZipFile(file_name) # create zipfile object zip_ref.extractall(destination_dir) # extract file to dir zip_ref.close() # close file print("... OK!") #os.remove(file_name) # delete zipped file print("Extraction complete!") def load_weather_data(weather_path=WEATHER_PATH_DRIVE_CSV,from_zip=False): if from_zip: extract_zip() data_frames=[] #first=True directory = os.path.join(weather_path,"") print(directory) for root,dirs,files in os.walk(directory): print(directory+".") for file in files: print(".") if file.endswith(".csv"): csv_path = os.path.join(weather_path, file) df = load_single_csv(csv_path) #Rename Columns #df=df.drop(columns=['ID']) #station = file.replace('.csv','') #for column in df.columns: # if(column not in ['MM/DD/YYYY','YEAR','MONTH','DAY']): # df.rename(columns={column: station +'-'+ column}, inplace=True) #print(station +'-'+ column) #Append to list data_frames.append(df) #if (first): # data_frames = df # first=False #else: # data_frames = pd.merge(data_frames, df, on=['MM/DD/YYYY','YEAR','MONTH','DAY'], how='left') return data_frames #return pd.concat(data_frames, axis=1) def load_usda_data(usda_path=USDA_PATH): csv_path = os.path.join(usda_path, "data.csv") print(csv_path) return pd.read_csv(csv_path, thousands=',')
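The four-level index that load_single_csv() builds with set_index() can also be requested directly at read time via the index_col argument of read_csv. This is only a sketch and assumes the CSV actually contains those four columns, as the station files in this project do.

```python
# Sketch: build the MultiIndex while reading, instead of with set_index().
import pandas as pd

def load_single_csv_sketch(csv_path):
    return pd.read_csv(
        csv_path,
        index_col=["MM/DD/YYYY", "YEAR", "MONTH", "DAY"],
        low_memory=False,
    )
```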
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Calculate Mean and Standard Deviation for each state
def save_csv(df,name,path):
    #print('Saving DataFrame in ',path)
    # Create target Directory if don't exist
    if not os.path.exists(path):
        os.mkdir(path)
        print("Directory " , path , " Created ")
    df.to_csv(path+name)

def read_log(file_path):
    files_processed = ""
    if not os.path.exists(file_path):
        with open(file_path,'w+') as f:
            files_processed = f.read()
    else :
        with open(file_path,'r+') as f:
            files_processed = f.read()
    return files_processed

def write_log(file_path,content):
    with open(file_path,'w+') as f:
        f.write(content)

def calculate(daf):
    print('TMAX',end='')
    daf['TMAX_mean'] = daf[[col for col in daf.columns if 'TMAX' in col ]].mean(1)
    print(' Ok mean')
    daf['TMAX_std'] = daf[[col for col in daf.columns if 'TMAX' in col ]].std(1)
    print(' Ok std')
    #daf = daf.drop(columns=['TMAX'])
    print(' OK\nTMIN', end='')
    daf['TMIN_mean'] = daf[[col for col in daf.columns if 'TMIN' in col ]].mean(1)
    print(' Ok mean')
    daf['TMIN_std'] = daf[[col for col in daf.columns if 'TMIN' in col ]].std(1)
    print(' Ok std')
    #daf = daf.drop(columns=['TMIN'])
    print(' OK\nTAVG', end='')
    daf['TAVG_mean'] = daf[[col for col in daf.columns if 'TAVG' in col ]].mean(1)
    print(' Ok mean')
    daf['TAVG_std'] = daf[[col for col in daf.columns if 'TAVG' in col ]].std(1)
    print(' Ok std')
    #daf = daf.drop(columns=['TAVG'])
    print(' OK\nPRCP', end='')
    daf['PRCP_mean'] = daf[[col for col in daf.columns if 'PRCP' in col ]].mean(1)
    print(' Ok mean')
    daf['PRCP_std'] = daf[[col for col in daf.columns if 'PRCP' in col ]].std(1)
    print(' Ok std')
    #daf = daf.drop(columns=['PRCP'])
    daf = daf.drop(columns=[col for col in daf.columns if col not in ['MM/DD/YYYY','YEAR','MONTH','DAY','TMAX_mean','TMAX_std','TMIN_mean','TMIN_std','TAVG_mean','TAVG_std','PRCP_mean','PRCP_std']])
    print(' OK')
    daf=daf.fillna(0)
    return daf

def calculate_mean(folder=''):
    root_path = WEATHER_PATH_DRIVE_CSV+folder+'/'
    new_path = os.path.join(WEATHER_PATH+'new/'+folder+'/','')
    file_path = new_path+folder+'.txt'
    if not os.path.exists(new_path):
        os.mkdir(new_path)
        print("Directory " , new_path , " Created ")
    files_processed = read_log(file_path)
    print(root_path)
    n=0
    count=0
    count2=70
    count_to_save=0
    count_to_reset=0
    total=0
    already_readed=False
    for root, dirs, files in os.walk(root_path):
        total=len(files)
        for file in files:
            if '.csv' in file:
                station=file.strip('.csv')
                if (station not in files_processed):
                    path = os.path.join(root, file)
                    df = load_single_csv(path)
                    df = df.drop(columns=['ID'])
                    if not already_readed:
                        try:
                            daf = load_single_csv(new_path+folder+'_tmp.csv')
                        except:
                            daf = df
                        already_readed=True
                    daf = pd.concat([daf,df], axis=1)
                    if count_to_save == 100:
                        count_to_save=0
                        print('saving')
                        save_csv(daf,folder+'_tmp',new_path)
                        write_log(file_path,files_processed)
                        print('saved')
                    count_to_save+=1
                    files_processed+=station+'\r\n'
                    del df
                if count2 == 70:
                    count2=0
                    actual = "%.2f%% " %((count/total)*100)
                    clear_output()
                    process = psutil.Process(os.getpid())
                    print('RAM usage: %.2f GB' %((process.memory_info().rss) / 1e9))
                    print('Loading ',folder,' stations in DataFrames... ',actual,' (',str(count),' of ',str(total),')')
                count+=1
                count2+=1
    save_csv(daf,folder+'_tmp',new_path)
    write_log(file_path,files_processed)
    print('Load done: %.2f%% ' %((count/total)*100))
    if("Done" not in files_processed):
        daf = load_single_csv(new_path+folder+'_tmp.csv')
        daf = calculate(daf)
        new_file_name = state+str(n)+'.csv'
        print('Saving ', new_file_name)
        save_csv(daf,new_file_name,new_path)
        print('Done saving ',new_file_name)
        write_log(file_path,files_processed+"Done\r\n")
        print('Done!')
        os.remove(new_path+folder+'_tmp.csv')
    else :
        print('Already processed.')
    return True

calculate_mean('FL')

calculated_states = ""
with open(CALCULATED_STATE_FILE, "r+") as f:
    calculated_states = f.read()
print('Already calculated:\n[\n',calculated_states,']\n')
for root, dirs, files in os.walk(WEATHER_PATH_DRIVE_CSV):
    total=len(dirs)
    for state in dirs:
        if (state not in calculated_states):
            if(calculate_mean(state)):
                calculated_states+= state
                with open(CALCULATED_STATE_FILE,"a") as f:
                    f.write(state+'\r\n')
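calculate() selects the columns whose names contain 'TMAX', 'TMIN', 'TAVG', or 'PRCP' with a list comprehension before averaging across them. DataFrame.filter(like=...) expresses the same idea a little more compactly; the sketch below uses made-up station column names purely for illustration.

```python
# Sketch: per-row mean/std over all columns whose names contain a given prefix.
import pandas as pd

demo = pd.DataFrame({
    "USC001-TMAX": [210, 220], "USC002-TMAX": [200, 230],
    "USC001-PRCP": [0, 12],    "USC002-PRCP": [3, 0],
})

summary = pd.DataFrame({
    "TMAX_mean": demo.filter(like="TMAX").mean(axis=1),
    "TMAX_std":  demo.filter(like="TMAX").std(axis=1),
    "PRCP_mean": demo.filter(like="PRCP").mean(axis=1),
    "PRCP_std":  demo.filter(like="PRCP").std(axis=1),
})
print(summary)
```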
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Set the Date column as the index
soybeans.Date = pd.to_datetime(soybeans.Date) soybeans.set_index('Date', inplace=True) soybeans.head() soybeans.tail() plt.plot(soybeans.index, soybeans.Settle) plt.title('CBOT Soybean Futures',fontsize=27) plt.ylabel('Price (0.01 $USD)',fontsize=27) plt.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%d')) plt.show() usda = load_usda_data()
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Filter soybeans by the year 2015:
mask = (soybeans['Date'] > '2015-01-01') & (soybeans['Date'] <= '2015-12-31') soybeans = soybeans.loc[mask] mask = (soybeans['Date'] > '2014-01-01') soybeans = soybeans.loc[mask]
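Once the Date column has been set as the index (as in the cell further above), the same year filter can also be written as a label slice on the sorted DatetimeIndex. The sketch below uses a small made-up frame rather than the notebook's soybeans data.

```python
# Sketch: selecting one calendar year via DatetimeIndex slicing.
import pandas as pd

demo = pd.DataFrame(
    {"Settle": [900.0, 950.0, 1010.0]},
    index=pd.to_datetime(["2014-12-31", "2015-06-01", "2016-01-04"]),
)
only_2015 = demo.loc["2015-01-01":"2015-12-31"]   # or simply demo.loc["2015"]
print(only_2015)
```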
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Filter weather to the most productive soybean-growing states
weather = weather.query("state in ('IA','IL','MN','NE','IN','OH','SD','ND','MO','AR','KS','MS','MI','WI','KY','TN','LA','NC','PA','VA','MD','AL','GA','NY','OK','SC','DE','NJ','TX','WV','FL')")
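The query() call with an `in` clause is equivalent to plain boolean indexing with Series.isin; a minimal sketch with made-up data and an abbreviated state list:

```python
# Sketch: isin() as an alternative to query("state in (...)").
import pandas as pd

top_states = ["IA", "IL", "MN", "NE", "IN"]   # abbreviated for illustration
demo = pd.DataFrame({"state": ["IA", "AZ", "IL"], "avgtemp": [50, 75, 48]})
print(demo[demo["state"].isin(top_states)])
```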
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Plot map data
stations = pd.read_csv('US_stations.csv') #stations.set_index(['LATITUDE','LONGITUDE'], inplace=True) stations.index.names #stations.drop_duplicates(subset=['LATITUDE','LONGITUDE']) stations.plot(kind="scatter", x="LONGITUDE", y="LATITUDE",fontsize=27,figsize=(20,15)) plt.title("Meteorological stations in the USA's most soy producing regions", fontsize=27) plt.gca().yaxis.set_major_locator(plt.NullLocator()) plt.gca().xaxis.set_major_formatter(plt.NullFormatter()) plt.axis('off') plt.show() weather.drop_duplicates(subset=['latitude','longitude']).plot(kind="scatter", x="longitude", y="latitude",fontsize=27,figsize=(20,15)) plt.title("Meteorological stations in the USA's most soy producing regions", fontsize=27) plt.gca().yaxis.set_major_locator(plt.NullLocator()) plt.gca().xaxis.set_major_formatter(plt.NullFormatter()) plt.axis('off') plt.show()
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Group data by date (daily)
Average of the hourly measurements for avgtemp, mintemp and maxtemp
weather = weather.groupby(['date'], as_index=False)['date','mintemp','maxtemp','avgtemp'].mean() weather.head() weather.date = pd.to_datetime(weather.date) weather.set_index('date', inplace=True)
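If the date column is converted to a DatetimeIndex first, the same daily aggregation can also be written with resample(). This is just an illustrative sketch on made-up readings, not the notebook's weather frame.

```python
# Sketch: daily means via resample() on a DatetimeIndex.
import pandas as pd

demo = pd.DataFrame({
    "date": pd.to_datetime(["2015-06-01 00:00", "2015-06-01 12:00", "2015-06-02 00:00"]),
    "avgtemp": [60.0, 70.0, 58.0],
}).set_index("date")

daily = demo.resample("D").mean()
print(daily)
```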
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Join datasets (soybeans + weather)
dtMarlon = soybeans.join(weather) dtMarlon.head()
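DataFrame.join aligns the two frames on their index. With the default how='left', every soybean trading day is kept and days without matching weather get NaN, which is why the forward fill further down is needed; an inner join would simply drop those days instead. A tiny sketch with made-up frames:

```python
# Sketch: left vs. inner join on a shared DatetimeIndex.
import pandas as pd

left = pd.DataFrame({"Settle": [950.0, 960.0]},
                    index=pd.to_datetime(["2015-06-01", "2015-06-02"]))
right = pd.DataFrame({"avgtemp": [60.0]},
                     index=pd.to_datetime(["2015-06-01"]))
print(left.join(right))                   # left join: 2015-06-02 keeps NaN weather
print(left.join(right, how="inner"))      # inner join: that row is dropped
```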
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Histograms
soybeans.hist(bins=50, figsize=(20,15)) plt.show() weather.hist(bins=50, figsize=(20,15)) plt.show() dtMarlon.hist(bins=50, figsize=(20,15)) plt.show()
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Time Series
plt.plot(soybeans.index, soybeans.Settle) plt.title('CBOT Soybean Futures',fontsize=27) plt.ylabel('Price (0.01 $USD)',fontsize=27) plt.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%d')) plt.show() plt.plot(weather.index, weather.avgtemp) plt.title('2015 USA Weather Avg, Max, Min') plt.ylabel('Avg. Temp. (ยฐF)'); plt.show() fig = plt.figure() ax1 = fig.add_subplot(111) ax1.plot(dtMarlon.index, dtMarlon.avgtemp, 'g-') ax1.set_ylabel('Avg. Temp. (ยฐF)') ax2 = ax1.twinx() ax2.plot(dtMarlon.index, dtMarlon.Settle, 'b-') ax2.set_ylabel('Price per bushel (0.01 $USD)') ax2.yaxis.set_major_formatter(mticker.FormatStrFormatter('%0.01d')) plt.title('2015 USA Weather Avg and CBOT Soybean Futures') plt.show()
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Missing weather values on days where we have soybean quotes
dtMarlon.query('avgtemp.isnull()') weather.query("date=='2015-06-05' or date=='2015-06-04' or date=='2015-06-03' or date=='2015-06-02' or date=='2015-06-01'") soybeans.query("Date=='2015-06-05' or Date=='2015-06-04' or Date=='2015-06-03' or Date=='2015-06-02' or Date=='2015-06-01'")
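The same rows can be selected with plain boolean indexing on isna(), which avoids any dependence on the query() expression engine (older pandas versions need engine='python' for .isnull() inside a query string). Sketch with made-up data:

```python
# Sketch: boolean indexing instead of query('avgtemp.isnull()').
import numpy as np
import pandas as pd

demo = pd.DataFrame({"Settle": [950.0, 960.0], "avgtemp": [np.nan, 61.0]})
print(demo[demo["avgtemp"].isna()])
```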
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Filling missing values with method 'ffill'
This propagates the last non-null value forward.
dtMarlon = dtMarlon.fillna(method='ffill')
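A tiny illustration of what the forward fill above does: each NaN takes the most recent earlier value, so gaps are bridged with the previous day's weather. The series below is made up for the example.

```python
# Sketch: forward fill on a small Series.
import numpy as np
import pandas as pd

s = pd.Series([60.0, np.nan, np.nan, 58.0])
print(s.ffill())   # 60.0, 60.0, 60.0, 58.0
```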
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
Correlation
dtMarlon.corr() dtMarlon.diff().corr() pd.plotting.autocorrelation_plot(dtMarlon)
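The reason for looking at .diff().corr() as well as .corr(): two series that both trend over time can show a large level correlation even when their day-to-day changes are unrelated. The sketch below uses synthetic data to make that contrast visible; it is not derived from the notebook's dataset.

```python
# Sketch: level correlation vs. correlation of daily changes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 250
trend = np.linspace(0, 10, n)
demo = pd.DataFrame({
    "a": trend + rng.normal(scale=0.5, size=n),
    "b": trend + rng.normal(scale=0.5, size=n),
})
print(demo.corr())          # high, driven mostly by the shared trend
print(demo.diff().corr())   # much weaker: the daily changes are independent noise
```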
_____no_output_____
Apache-2.0
Create_Dataset_MarlonFranco.ipynb
marlonrcfranco/DatasetMarlon
numpy
- a matrix / linear algebra / statistics package
- the theoretical background of machine learning is built on linear algebra and statistics
- machine learning packages such as scikit-learn are built on top of NumPy

* Unless you are developing machine learning algorithms or scientific/statistics packages such as SciPy yourself, you do not need to know NumPy in fine detail, but understanding NumPy is important for Python-based data analysis and machine learning.
* NumPy is not always the easiest or most convenient tool for data handling; however, pandas, which is the main tool for data handling, is itself largely built on NumPy.

* ndarray
 - the NumPy array data type
 - created with np.array()

* Data type summary
 - Python (list / tuple)
 - NumPy ndarray
 - pandas DataFrame / Series
import numpy as np

list_1 = [1, 2, 3]
list_2 = [9, 8, 7]
arr = np.array([list_1, list_2])
arr

print(type(arr))
print(arr.shape)
print(arr.ndim)

# set the value in the second row, second column to 100
arr[[1], 1] = 100
arr

arr1 = np.array([1, 2, 3])
arr2 = np.array([[1, 2, 3]])
print(arr1.shape)  # 1-D, with 3 elements
print(arr2.shape)  # 2-D, with 1 row and 3 columns

# check the number of dimensions
print(arr1.ndim, "dimension")
print(arr2.ndim, "dimension")
(3,) (1, 3) 1 dimension 2 dimension
MIT
jupyter/dAnalysis/b_numpy_class/Ex01_dnarray.ipynb
WoolinChoi/test
Data types
# check the container type
print(type(arr))

# check the element data type
print(arr.dtype)

# change the element data type
arr2 = arr.astype(np.float64)
print(arr2.dtype)

list1 = [1, 2, 3.6]
print(type(list1))
# convert to ndarray
list1 = np.array(list1)
print(type(list1))
# element dtype
list1.astype(np.float64)
print(list1.dtype)

list2 = [1, 2.3, 'python']
print(type(list2))
# convert to ndarray
list2 = np.array(list2)
print(type(list2))
# element dtype
print(list2.dtype)  # U32
# [result] <U11 : Unicode string
_____no_output_____
MIT
jupyter/dAnalysis/b_numpy_class/Ex01_dnarray.ipynb
WoolinChoi/test
Creating an ndarray conveniently
* arange(): create an array from a range
* zeros(): create an array filled with 0s
* ones(): create an array filled with 1s
a = np.arange(10)
print(a)
b = np.arange(1, 11)
print(b)
c = np.arange(1, 11, 3)
print(c)

a2 = np.zeros((5, 5))  # default dtype: float
a2

a3 = np.ones((3, 4), dtype='int32')
a3
_____no_output_____
MIT
jupyter/dAnalysis/b_numpy_class/Ex01_dnarray.ipynb
WoolinChoi/test
Changing the dimensions and size of an ndarray: reshape()
arr = np.arange(10)
print(arr)
print(arr.shape)
print(arr.ndim)  # checking the number of dimensions with ndim is recommended

arr2 = arr.reshape(2, 5)  # change the shape to 2 rows x 5 columns
print(arr2)
print(arr2.shape)
print(arr2.ndim)

# using -1
arr = np.arange(20)
print(arr)
arr2 = arr.reshape(5, -1)
print(arr2)
arr3 = arr.reshape(-1, 2)
print(arr3)
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19] [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15] [16 17 18 19]] [[ 0 1] [ 2 3] [ 4 5] [ 6 7] [ 8 9] [10 11] [12 13] [14 15] [16 17] [18 19]]
MIT
jupyter/dAnalysis/b_numpy_class/Ex01_dnarray.ipynb
WoolinChoi/test
Indexing: extracting specific data
#------------------------------------------ (1) extracting a single value
import numpy as np

arr = np.arange(1, 11)
print(arr)

## extract the element at index 3 (indexing starts at 0)
print(arr[3])

## extract the third element from the end (indexing from the end starts at -1)
print(arr[-3])

## create an ndarray from 1 to 9, reshape it into a 3x3 two-dimensional structure,
## then extract the value in the second row, third column
arr = np.arange(1, 10)
print(arr)
arr2 = arr.reshape(3, 3)
print(arr2.ndim)
print(arr2)
print(arr2[[1], 2])

#------------------------------------------ (2) slicing (:)
arr = np.arange(1, 10)
print(arr)

# extract the 2nd through 4th elements
print(arr[1:4])

# extract from the 2nd element to the last
print(arr[1:])

# extract from the first element to the 4th
print(arr[0:4])

# slicing a 2-D array
'''
1 2 3
4 5 6
7 8 9
'''
arr = np.arange(1, 10)
ndarr = arr.reshape(3,3)
print(ndarr)

## extract 1, 2, 4, 5 from the grid above
print(ndarr[[0, 1], :2])

## extract 4, 5, 6, 7, 8, 9 from the grid above
print(ndarr[[1, 2], :])

## extract 2, 3, 5, 6 from the grid above
print(ndarr[[0, 1], 1:])

## extract 1, 4 from the grid above
print(ndarr[[0, 1], :1])

## extract all elements
print(ndarr[:, :])
print(ndarr[::])
print(ndarr)

# for a 2-D ndarray, omitting the trailing index returns a 1-D array
# for a 3-D ndarray, omitting the trailing index returns a 2-D array
print(ndarr[1][1])
print(ndarr[1])  # 1-D ndarray

# combining slicing [:] with index selection
'''
1 2 3
4 5 6
7 8 9
'''
import numpy as np
arr = np.arange(1, 10)
ndarr = arr.reshape(3,3)
print(ndarr)

## extract 1, 2, 4, 5 from the grid above
print(ndarr[[0, 1], :2])
print(ndarr[:2, :2])
# print(ndarr[[0, 1], [0, 1]])

## extract 4, 5, 6, 7, 8, 9 from the grid above
print(ndarr[[1, 2], :3])

## extract 2, 3, 5, 6 from the grid above
print(ndarr[[0, 1], 1:3])

## extract 1, 4 from the grid above
print(ndarr[[0 ,1], :1])

## extract 3, 6 from the grid above
print(ndarr[[0, 1], 2:])

# this may look very simple now, but it becomes confusing later with real data,
# so it is worth spending time on it here

#------------------------------------------ (3) boolean indexing
# frequently used because it combines conditional filtering and lookup
arr = np.arange(1, 10)
ndarr = arr.reshape(3,3)
print(ndarr)

# extract the elements greater than 5
print(ndarr > 5)

# change the element with value 8 to 88
ndarr[[2], 1] = 88
ndarr
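The cell above only prints the boolean mask (ndarr > 5); passing that mask back into the array is what actually extracts the matching elements. A short sketch:

```python
# Sketch: using a boolean mask to extract and assign values.
import numpy as np

ndarr = np.arange(1, 10).reshape(3, 3)
mask = ndarr > 5
print(ndarr[mask])          # [6 7 8 9] as a flattened 1-D array
ndarr[ndarr == 8] = 88      # boolean indexing can also assign, like the 88 above
print(ndarr)
```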
[[1 2 3] [4 5 6] [7 8 9]] [[False False False] [False False True] [ True True True]]
MIT
jupyter/dAnalysis/b_numpy_class/Ex01_dnarray.ipynb
WoolinChoi/test
Hello PixieDust!
This sample notebook provides an introduction to many of the features included in PixieDust. You can find more information about PixieDust at https://ibm-watson-data-lab.github.io/pixiedust/. To ensure you are running the latest version of PixieDust, uncomment and run the following cell. Do not run this cell if you installed PixieDust locally from source and want to continue to run PixieDust from source.
#!pip install --user --upgrade pixiedust
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
Import PixieDust
Run the following cell to import the PixieDust library. You may need to restart your kernel after importing. Follow the instructions, if any, after running the cell. Note: You must import PixieDust every time you restart your kernel.
import pixiedust
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
Enable the Spark Progress Monitor
PixieDust includes a Spark Progress Monitor bar that lets you track the status of your Spark jobs. You can find more info at https://ibm-watson-data-lab.github.io/pixiedust/sparkmonitor.html. Run the following cell to enable the Spark Progress Monitor:
pixiedust.enableJobMonitor();
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
Example use of the PackageManager
You can use the PackageManager component of PixieDust to install and uninstall Maven packages into your notebook kernel without editing configuration files. This component is essential when you run notebooks from a hosted cloud environment and do not have access to the configuration files. You can find more info at https://ibm-watson-data-lab.github.io/pixiedust/packagemanager.html. Run the following cell to install the GraphFrames package. You may need to restart your kernel after installing new packages. Follow the instructions, if any, after running the cell.
pixiedust.installPackage("graphframes:graphframes:0.1.0-spark1.6") print("done")
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
Run the following cell to print out all installed packages:
pixiedust.printAllPackages()
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
Example use of the display() API
PixieDust lets you visualize your data in just a few clicks using the display() API. You can find more info at https://ibm-watson-data-lab.github.io/pixiedust/displayapi.html. The following cell creates a DataFrame and uses the display() API to create a bar chart:
sqlContext=SQLContext(sc) d1 = sqlContext.createDataFrame( [(2010, 'Camping Equipment', 3), (2010, 'Golf Equipment', 1), (2010, 'Mountaineering Equipment', 1), (2010, 'Outdoor Protection', 2), (2010, 'Personal Accessories', 2), (2011, 'Camping Equipment', 4), (2011, 'Golf Equipment', 5), (2011, 'Mountaineering Equipment',2), (2011, 'Outdoor Protection', 4), (2011, 'Personal Accessories', 2), (2012, 'Camping Equipment', 5), (2012, 'Golf Equipment', 5), (2012, 'Mountaineering Equipment', 3), (2012, 'Outdoor Protection', 5), (2012, 'Personal Accessories', 3), (2013, 'Camping Equipment', 8), (2013, 'Golf Equipment', 5), (2013, 'Mountaineering Equipment', 3), (2013, 'Outdoor Protection', 8), (2013, 'Personal Accessories', 4)], ["year","zone","unique_customers"]) display(d1)
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
Example use of the Scala bridge
Data scientists working with Spark may occasionally need to call out to one of the hundreds of libraries available on spark-packages.org that are written in Scala or Java. PixieDust provides a solution to this problem by letting users write and run Scala code directly in its own cell. It also lets variables be shared between Python and Scala and vice versa. You can find more info at https://ibm-watson-data-lab.github.io/pixiedust/scalabridge.html. Start by creating a Python variable that we'll use in Scala:
python_var = "Hello From Python" python_num = 10
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
Create Scala code that uses the python_var and creates a new variable that we'll use in Python:
%%scala println(python_var) println(python_num+10) val __scala_var = "Hello From Scala"
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
Use the __scala_var from Python:
print(__scala_var)
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
Sample Data
PixieDust includes a number of sample data sets. You can use these sample data sets to start playing with the display() API and other PixieDust features. You can find more info at https://ibm-watson-data-lab.github.io/pixiedust/loaddata.html. Run the following cell to view the available data sets:
pixiedust.sampleData()
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust
Example use of sample data
To use sample data locally, run the following cell to install the required packages. You may need to restart your kernel after running this cell.
pixiedust.installPackage("com.databricks:spark-csv_2.10:1.5.0") pixiedust.installPackage("org.apache.commons:commons-csv:0")
_____no_output_____
Apache-2.0
notebook/Intro to PixieDust.ipynb
jordangeorge/pixiedust