markdown (stringlengths 0-1.02M) | code (stringlengths 0-832k) | output (stringlengths 0-1.02M) | license (stringlengths 3-36) | path (stringlengths 6-265) | repo_name (stringlengths 6-127)
---|---|---|---|---|---|
Set the number of rows to be used as training, test and validation data from the input CSV that contains the training data | Trainrow = 1
testrow = 45
valrow = 49 | _____no_output_____ | MIT | Colab NoteBooks/BertSum_and_PreSumm_preprocessing.ipynb | gagan94/PreSumm |
Reading the input text from the CSV file mentioned here. Change according to requirements. | import json
import pandas as pd
def format_to_lines():
p_ct = 0
i = 0
for i in range(Trainrow):
df_csv = pd.read_csv('./fincident.csv',encoding='cp1252')
dataset = []
d = _format_to_lines_new(df_csv,i)
dataset.append(d)
i += 1
pt_file = "{:s}.{:s}.{:d}.json".format('bert_data', "train", p_ct)
with open(pt_file, 'w') as save:
# save.write('\n'.join(dataset))
save.write(json.dumps(dataset))
p_ct += 1
print(p_ct)
dataset = []
REMAP = {"-lrb-": "(", "-rrb-": ")", "-lcb-": "{", "-rcb-": "}",
"-lsb-": "[", "-rsb-": "]", "``": '"', "''": '"'}
import re
def clean(x):
return re.sub(
r"-lrb-|-rrb-|-lcb-|-rcb-|-lsb-|-rsb-|``|''",
lambda m: REMAP.get(m.group()), x)
lower = True
def _format_to_lines_new(df_csv,i):
source = []
tgt = []
flag = False
print(df_csv['Texts'][i])
ann = client.annotate(str(df_csv['Texts'][i]),annotators='tokenize,ssplit', output_format='json' )
for sent in ann['sentences']:
tokens = [t['word'] for t in sent['tokens']]
if (lower):
tokens = [t.lower() for t in tokens]
if (tokens[0] == '@highlight'):
flag = True
continue
if (flag):
tgt.append(tokens)
flag = False
else:
source.append(tokens)
ann = client.annotate(str(df_csv['Analysis'][i]),annotators='tokenize,ssplit', output_format='json' )
for sent in ann['sentences']:
tokens = [t['word'] for t in sent['tokens']]
if (lower):
tokens = [t.lower() for t in tokens]
if (tokens[0] == '@highlight'):
flag = True
continue
if (flag):
tgt.append(tokens)
flag = False
else:
tgt.append(tokens)
source = [clean(' '.join(sent)).split() for sent in source]
tgt = [clean(' '.join(sent)).split() for sent in tgt]
return {'src': source, 'tgt': tgt}
import glob
from os.path import join as pjoin
from multiprocess import Pool
# Multiprocessing driver: expects an args object with raw_path, save_path and n_cpus,
# plus a corpus_type string (e.g. "train") matching the JSON file names.
def format_to_bert(args):
for i in range(Trainrow):
a_lst = []
for json_f in glob.glob(pjoin(args.raw_path, '*' + corpus_type + '.*.json')):
real_name = json_f.split('/')[-1]
a_lst.append((json_f, args, pjoin(args.save_path, real_name.replace('json', 'bert.pt'))))
print(a_lst)
pool = Pool(args.n_cpus)
for d in pool.imap(_format_to_bert, a_lst):
pass
pool.close()
pool.join()
| _____no_output_____ | MIT | Colab NoteBooks/BertSum_and_PreSumm_preprocessing.ipynb | gagan94/PreSumm |
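Note that the cell above calls `client.annotate(...)`, but `client` itself is never created in this excerpt. Below is a minimal sketch of one possible setup, assuming Stanza's CoreNLP wrapper (an assumption: it requires a local CoreNLP installation and the `CORENLP_HOME` environment variable, and the original notebook may have configured the client differently).

```python
# Hypothetical client setup: Stanza's CoreNLP wrapper exposes an annotate() call
# compatible with the usage above (tokenize + ssplit, JSON output).
from stanza.server import CoreNLPClient

client = CoreNLPClient(annotators=['tokenize', 'ssplit'],
                       timeout=30000,
                       memory='4G',
                       be_quiet=True)
client.start()   # launches the Java CoreNLP server in the background
# format_to_lines() can now be called; stop the server afterwards with client.stop()
```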
Generate JSON file of the training data from the CSV | format_to_lines() | Hello Colleagues The solution creation is failing in QTC 906 with all the users with generic error message saying authorisation is missing. but with the same users the solutions were created previously also. NOTE : Even after the error message is shown the solution can be seen in solution explorer window but when clicked on it to sync then it says solution does not exist in the system. can you please check this ? This is blocking us for creating solution to test our automates. Regards Vishwanath A. Telsang
Hi Colleagues The issue is coming up because .myproj is not getting created in XREP. Please take over and check what can be done. Thanks Regards Sasi.
Hi I debugged the SDk and found out thar we call the PDI_1o_product_create to create product and after the product creation is done we make a separate call to generate solution file (.myproj .sln and .suo ) The load in system QTC 906 is too heavy. thera are in total 4700 solution present in the tenant. Since the system load is too high the solution creation is being timedout and we can make a backend call for creating these solution realted files. To resolve the issue we need to make system faster by removing the unwanted solution. To workaround: 1. Create a dummy solution in any tenant 2. Do a download as copy and deploy to QTC 906. 3. Create patch in 906 . 4. Use this solution for automate related task. Regards Nadeem
Hi I debugged the SDk and found out thar we call the PDI_1o_product_create to create product and after the product creation is done we make a separate call to generate solution file (.myproj .sln and .suo ) The load in system QTC 906 is too heavy. thera are in total 4700 solution present in the tenant. Since the system load is too high the solution creation is being timedout and we can make a backend call for creating these solution realted files. To resolve the issue we need to make system faster by removing the unwanted solution. To workaround: 1. Create a dummy solution in any tenant 2. Do a download as copy and deploy to QTC 906. 3. Create patch in 906 . 4. Use this solution for automate related task. Regards Nadeem
1
| MIT | Colab NoteBooks/BertSum_and_PreSumm_preprocessing.ipynb | gagan94/PreSumm |
Different modules required below for preprocessing | def _get_word_ngrams(n, sentences):
"""Calculates word n-grams for multiple sentences.
"""
assert len(sentences) > 0
assert n > 0
# words = _split_into_words(sentences)
words = sum(sentences, [])
# words = [w for w in words if w not in stopwords]
return _get_ngrams(n, words)
def _get_ngrams(n, text):
"""Calcualtes n-grams.
Args:
n: which n-grams to calculate
text: An array of tokens
Returns:
A set of n-grams
"""
ngram_set = set()
text_length = len(text)
max_index_ngram_start = text_length - n
for i in range(max_index_ngram_start + 1):
ngram_set.add(tuple(text[i:i + n]))
return ngram_set
def greedy_selection(doc_sent_list, abstract_sent_list, summary_size):
def _rouge_clean(s):
return re.sub(r'[^a-zA-Z0-9 ]', '', s)
max_rouge = 0.0
abstract = sum(abstract_sent_list, [])
abstract = _rouge_clean(' '.join(abstract)).split()
sents = [_rouge_clean(' '.join(s)).split() for s in doc_sent_list]
evaluated_1grams = [_get_word_ngrams(1, [sent]) for sent in sents]
reference_1grams = _get_word_ngrams(1, [abstract])
evaluated_2grams = [_get_word_ngrams(2, [sent]) for sent in sents]
reference_2grams = _get_word_ngrams(2, [abstract])
selected = []
for s in range(summary_size):
cur_max_rouge = max_rouge
cur_id = -1
for i in range(len(sents)):
if (i in selected):
continue
c = selected + [i]
candidates_1 = [evaluated_1grams[idx] for idx in c]
candidates_1 = set.union(*map(set, candidates_1))
candidates_2 = [evaluated_2grams[idx] for idx in c]
candidates_2 = set.union(*map(set, candidates_2))
rouge_1 = cal_rouge(candidates_1, reference_1grams)['f']
rouge_2 = cal_rouge(candidates_2, reference_2grams)['f']
rouge_score = rouge_1 + rouge_2
if rouge_score > cur_max_rouge:
cur_max_rouge = rouge_score
cur_id = i
if (cur_id == -1):
return selected
selected.append(cur_id)
max_rouge = cur_max_rouge
return sorted(selected)
def cal_rouge(evaluated_ngrams, reference_ngrams):
reference_count = len(reference_ngrams)
evaluated_count = len(evaluated_ngrams)
overlapping_ngrams = evaluated_ngrams.intersection(reference_ngrams)
overlapping_count = len(overlapping_ngrams)
if evaluated_count == 0:
precision = 0.0
else:
precision = overlapping_count / evaluated_count
if reference_count == 0:
recall = 0.0
else:
recall = overlapping_count / reference_count
f1_score = 2.0 * ((precision * recall) / (precision + recall + 1e-8))
return {"f": f1_score, "p": precision, "r": recall}
from pytorch_pretrained_bert import BertTokenizer | _____no_output_____ | MIT | Colab NoteBooks/BertSum_and_PreSumm_preprocessing.ipynb | gagan94/PreSumm |
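As a quick sanity check of the greedy oracle selection defined above, here is a tiny, made-up example (the sentences are hypothetical and not from the dataset); it relies on `re` being available, as imported in the earlier cell.

```python
import re  # used inside greedy_selection via _rouge_clean

# Toy example: select the 2 source sentences that maximise ROUGE-1 + ROUGE-2 against the summary.
toy_src = [['the', 'cat', 'sat', 'on', 'the', 'mat'],
           ['dogs', 'bark', 'loudly', 'at', 'night'],
           ['the', 'cat', 'chased', 'a', 'mouse']]
toy_tgt = [['the', 'cat', 'sat', 'on', 'the', 'mat'],
           ['the', 'cat', 'chased', 'a', 'mouse']]
print(greedy_selection(toy_src, toy_tgt, 2))  # -> [0, 2]
```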
Execute the below code for Preprocessing for PreSumm Only | # For PreSumm
class BertData():
def __init__(self):
self.tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
self.sep_token = '[SEP]'
self.cls_token = '[CLS]'
self.pad_token = '[PAD]'
self.tgt_bos = '[unused0]'
self.tgt_eos = '[unused1]'
self.tgt_sent_split = '[unused2]'
self.sep_vid = self.tokenizer.vocab[self.sep_token]
self.cls_vid = self.tokenizer.vocab[self.cls_token]
self.pad_vid = self.tokenizer.vocab[self.pad_token]
def preprocess(self, src, tgt, sent_labels, use_bert_basic_tokenizer=False, is_test=False):
if ((not is_test) and len(src) == 0):
print("returned none")
return None
original_src_txt = [' '.join(s) for s in src]
idxs = [i for i, s in enumerate(src) if (len(s) > 5)]
_sent_labels = [0] * len(src)
for l in sent_labels:
_sent_labels[l] = 1
src = [src[i][:200] for i in idxs]
sent_labels = [_sent_labels[i] for i in idxs]
src = src[:200]
sent_labels = sent_labels[:200]
if ((not is_test) and len(src) < 3):
print("returned none")
return None
src_txt = [' '.join(sent) for sent in src]
text = ' {} {} '.format(self.sep_token, self.cls_token).join(src_txt)
src_subtokens = self.tokenizer.tokenize(text)
src_subtokens = [self.cls_token] + src_subtokens + [self.sep_token]
src_subtoken_idxs = self.tokenizer.convert_tokens_to_ids(src_subtokens)
_segs = [-1] + [i for i, t in enumerate(src_subtoken_idxs) if t == self.sep_vid]
segs = [_segs[i] - _segs[i - 1] for i in range(1, len(_segs))]
segments_ids = []
for i, s in enumerate(segs):
if (i % 2 == 0):
segments_ids += s * [0]
else:
segments_ids += s * [1]
cls_ids = [i for i, t in enumerate(src_subtoken_idxs) if t == self.cls_vid]
sent_labels = sent_labels[:len(cls_ids)]
tgt_subtokens_str = '[unused0] ' + ' [unused2] '.join(
[' '.join(self.tokenizer.tokenize(' '.join(tt))) for tt in tgt]) + ' [unused1]'
tgt_subtoken = tgt_subtokens_str.split()[:200]
if ((not is_test) and len(tgt_subtoken) < 20):
print("returned none <20")
print(len(tgt_subtoken))
return None
tgt_subtoken_idxs = self.tokenizer.convert_tokens_to_ids(tgt_subtoken)
tgt_txt = '<q>'.join([' '.join(tt) for tt in tgt])
src_txt = [original_src_txt[i] for i in idxs]
return src_subtoken_idxs, sent_labels, tgt_subtoken_idxs, segments_ids, cls_ids, src_txt, tgt_txt | _____no_output_____ | MIT | Colab NoteBooks/BertSum_and_PreSumm_preprocessing.ipynb | gagan94/PreSumm |
Execute the below code for Preprocessing for BertSum Only | # For BertSum
class BertData():
def __init__(self):
self.tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
self.sep_vid = self.tokenizer.vocab['[SEP]']
self.cls_vid = self.tokenizer.vocab['[CLS]']
self.pad_vid = self.tokenizer.vocab['[PAD]']
def preprocess(self, src, tgt, oracle_ids):
if (len(src) == 0):
return None
original_src_txt = [' '.join(s) for s in src]
labels = [0] * len(src)
for l in oracle_ids:
labels[l] = 1
idxs = [i for i, s in enumerate(src) if (len(s) > 5)]
src = [src[i][:200] for i in idxs]
labels = [labels[i] for i in idxs]
src = src[:200]
labels = labels[:200]
print("length of label")
print(len(labels))
if (len(src) < 10):
return None
if (len(labels) == 0):
return None
src_txt = [' '.join(sent) for sent in src]
# text = [' '.join(ex['src_txt'][i].split()[:self.args.max_src_ntokens]) for i in idxs]
# text = [_clean(t) for t in text]
text = ' [SEP] [CLS] '.join(src_txt)
src_subtokens = self.tokenizer.tokenize(text)
src_subtokens = src_subtokens[:510]
src_subtokens = ['[CLS]'] + src_subtokens + ['[SEP]']
src_subtoken_idxs = self.tokenizer.convert_tokens_to_ids(src_subtokens)
_segs = [-1] + [i for i, t in enumerate(src_subtoken_idxs) if t == self.sep_vid]
segs = [_segs[i] - _segs[i - 1] for i in range(1, len(_segs))]
segments_ids = []
for i, s in enumerate(segs):
if (i % 2 == 0):
segments_ids += s * [0]
else:
segments_ids += s * [1]
cls_ids = [i for i, t in enumerate(src_subtoken_idxs) if t == self.cls_vid]
labels = labels[:len(cls_ids)]
tgt_txt = '<q>'.join([' '.join(tt) for tt in tgt])
src_txt = [original_src_txt[i] for i in idxs]
return src_subtoken_idxs, labels, segments_ids, cls_ids, src_txt, tgt_txt
| _____no_output_____ | MIT | Colab NoteBooks/BertSum_and_PreSumm_preprocessing.ipynb | gagan94/PreSumm |
Execute the below code for Preprocessing for PreSumm Only | ## For PreSumm Only
import gc
import torch
def _format_to_bert():
p_ct = 0
for i in range(Trainrow):
pt_file = "{:s}.{:s}.{:d}.json".format('bert_data', "train", p_ct)
json_file = pt_file
save_file = pt_file.replace('.json','.pt')
p_ct += 1
bert = BertData()
jobs = json.load(open(json_file))
datasets = []
for d in jobs:
source, tgt = d['src'], d['tgt']
print("inside")
oracle_ids = greedy_selection(source, tgt, 3)
#elif (args.oracle_mode == 'combination'):
# oracle_ids = combination_selection(source, tgt, 3)
print(oracle_ids)
b_data = bert.preprocess(source, tgt, oracle_ids)
if (b_data is None):
print("None")
continue
src_subtoken_idxs, sent_labels, tgt_subtoken_idxs, segments_ids, cls_ids, src_txt, tgt_txt = b_data
#for BertSumm
#indexed_tokens, labels, segments_ids, cls_ids, src_txt, tgt_txt = b_data
# for BertSumm
#b_data_dict = {"src": indexed_tokens, "labels": labels, "segs": segments_ids, 'clss': cls_ids,
# 'src_txt': src_txt, "tgt_txt": tgt_txt}
b_data_dict = {"src": src_subtoken_idxs, "tgt": tgt_subtoken_idxs,
"src_sent_labels": sent_labels, "segs": segments_ids, 'clss': cls_ids,
'src_txt': src_txt, "tgt_txt": tgt_txt}
print(b_data_dict)
datasets.append(b_data_dict)
torch.save(datasets, save_file)
datasets = []
gc.collect() | _____no_output_____ | MIT | Colab NoteBooks/BertSum_and_PreSumm_preprocessing.ipynb | gagan94/PreSumm |
Execute the below code for Preprocessing for BertSum Only | ## For BertSum Only
import gc
import torch
def _format_to_bert():
p_ct = 0
for i in range(Trainrow):
pt_file = "{:s}.{:s}.{:d}.json".format('bert_data', "train", p_ct)
json_file = pt_file
save_file = pt_file.replace('.json','.pt')
p_ct += 1
bert = BertData()
jobs = json.load(open(json_file))
datasets = []
for d in jobs:
source, tgt = d['src'], d['tgt']
print("inside")
oracle_ids = greedy_selection(source, tgt, 3)
#elif (args.oracle_mode == 'combination'):
# oracle_ids = combination_selection(source, tgt, 3)
print(oracle_ids)
b_data = bert.preprocess(source, tgt, oracle_ids)
if (b_data is None):
print("None")
continue
#for BertSumm
indexed_tokens, labels, segments_ids, cls_ids, src_txt, tgt_txt = b_data
# for BertSumm
b_data_dict = {"src": indexed_tokens, "labels": labels, "segs": segments_ids, 'clss': cls_ids,
'src_txt': src_txt, "tgt_txt": tgt_txt}
print(b_data_dict)
datasets.append(b_data_dict)
torch.save(datasets, save_file)
datasets = []
gc.collect()
_format_to_bert() | inside
[0, 18, 22]
{'src': [101, 7592, 8628, 1996, 5576, 4325, 2003, 7989, 1999, 1053, 13535, 3938, 2575, 2007, 2035, 1996, 5198, 2007, 12391, 7561, 4471, 3038, 3166, 6648, 2003, 4394, 1012, 102, 101, 2021, 2007, 1996, 2168, 5198, 1996, 7300, 2020, 2580, 3130, 2036, 1012, 102, 101, 3602, 1024, 2130, 2044, 1996, 7561, 4471, 2003, 3491, 1996, 5576, 2064, 2022, 2464, 1999, 5576, 10566, 3332, 2021, 2043, 13886, 2006, 2009, 2000, 26351, 2059, 2009, 2758, 5576, 2515, 2025, 4839, 1999, 1996, 2291, 1012, 102, 101, 2064, 2017, 3531, 4638, 2023, 1029, 102, 101, 2023, 2003, 10851, 2149, 2005, 4526, 5576, 2000, 3231, 2256, 8285, 15416, 1012, 102, 101, 12362, 25292, 18663, 16207, 1037, 1012, 10093, 8791, 2290, 7632, 8628, 1996, 3277, 2003, 2746, 2039, 2138, 1012, 102, 101, 2026, 21572, 3501, 2003, 2025, 2893, 2580, 1999, 1060, 2890, 2361, 1012, 102, 101, 3531, 2202, 2058, 1998, 4638, 2054, 2064, 2022, 2589, 1012, 102, 101, 7632, 1045, 2139, 8569, 15567, 1996, 17371, 2243, 1998, 2179, 2041, 22794, 2099, 2057, 2655, 1996, 22851, 2072, 1035, 1015, 2080, 1035, 4031, 1035, 3443, 2000, 3443, 4031, 1998, 2044, 1996, 4031, 4325, 2003, 2589, 2057, 2191, 1037, 3584, 2655, 2000, 9699, 5576, 5371, 1006, 1012, 102, 101, 10514, 2080, 1007, 1996, 7170, 1999, 2291, 1053, 13535, 3938, 2575, 2003, 2205, 3082, 1012, 102, 101, 1996, 2527, 2024, 1999, 2561, 21064, 2692, 5576, 2556, 1999, 1996, 16713, 1012, 102, 101, 2144, 1996, 2291, 7170, 2003, 2205, 2152, 1996, 5576, 4325, 2003, 2108, 22313, 5833, 1998, 2057, 2064, 2191, 1037, 2067, 10497, 2655, 2005, 4526, 2122, 5576, 2613, 3064, 6764, 1012, 102, 101, 2000, 10663, 1996, 3277, 2057, 2342, 2000, 2191, 2291, 5514, 2011, 9268, 1996, 18162, 5576, 1012, 102, 101, 3443, 1037, 24369, 5576, 1999, 2151, 16713, 1016, 1012, 102, 101, 2079, 1037, 8816, 2004, 6100, 1998, 21296, 2000, 1053, 13535, 3938, 2575, 1012, 102, 101, 2224, 2023, 5576, 2005, 8285, 8585, 3141, 4708, 1012, 102, 101, 12362, 23233, 21564, 7632, 1045, 2139, 8569, 15567, 1996, 17371, 2243, 1998, 2179, 2041, 22794, 2099, 2057, 2655, 1996, 22851, 2072, 1035, 1015, 2080, 1035, 4031, 1035, 3443, 2000, 3443, 4031, 1998, 2044, 1996, 4031, 4325, 2003, 2589, 2057, 2191, 1037, 3584, 2655, 2000, 9699, 5576, 5371, 1006, 1012, 102, 101, 10514, 2080, 1007, 1996, 7170, 1999, 2291, 1053, 13535, 3938, 2575, 2003, 2205, 3082, 1012, 102, 101, 1996, 2527, 2024, 1999, 2561, 21064, 2692, 5576, 2556, 1999, 1996, 16713, 1012, 102, 101, 2144, 1996, 2291, 7170, 2003, 2205, 2152, 1996, 5576, 4325, 2003, 2108, 22313, 5833, 1998, 2057, 2064, 2191, 1037, 2067, 10497, 2655, 2005, 4526, 2122, 5576, 2613, 3064, 6764, 1012, 102, 101, 2000, 10663, 1996, 3277, 2057, 2342, 2000, 2191, 2291, 5514, 2011, 9268, 1996, 18162, 5576, 1012, 102, 101, 3443, 1037, 24369, 5576, 1999, 2151, 16713, 1016, 1012, 102, 101, 2079, 1037, 8816, 2004, 6100, 1998, 21296, 2000, 1053, 13535, 3938, 2575, 1012, 102, 101, 2224, 2023, 5576, 2005, 8285, 8585, 3141, 4708, 1012, 102], 'tgt': [1, 1996, 5576, 4325, 2003, 7989, 1999, 1053, 13535, 3938, 2575, 2007, 2035, 1996, 5198, 2007, 12391, 7561, 4471, 3038, 3166, 6648, 2003, 4394, 1012, 3, 1996, 3277, 2003, 2746, 2039, 2138, 1012, 3, 2026, 21572, 3501, 2003, 2025, 2893, 2580, 1999, 1060, 2890, 2361, 1012, 3, 1996, 7170, 1999, 2291, 1053, 13535, 3938, 2575, 2003, 2205, 3082, 1012, 3, 2045, 2024, 1999, 2561, 21064, 2692, 5576, 2556, 1999, 1996, 16713, 1012, 3, 2144, 1996, 2291, 7170, 2003, 2205, 2152, 1996, 5576, 4325, 2003, 2108, 22313, 5833, 1012, 3, 2000, 2147, 24490, 1024, 1015, 1012, 3, 3443, 1037, 24369, 5576, 1999, 2151, 16713, 1016, 1012, 3, 
2079, 1037, 8816, 2004, 6100, 1998, 21296, 2000, 1053, 13535, 3938, 2575, 1012, 3, 1017, 1012, 3, 3443, 8983, 1999, 3938, 2575, 1012, 3, 1018, 1012, 3, 2224, 2023, 5576, 2005, 8285, 8585, 3141, 4708, 1012, 2], 'src_sent_labels': [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0], 'segs': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'clss': [0, 28, 42, 80, 88, 103, 123, 137, 149, 197, 214, 229, 261, 279, 290, 305, 316, 367, 384, 399, 431, 449, 460, 475], 'src_txt': ['hello colleagues the solution creation is failing in qtc 906 with all the users with generic error message saying authorisation is missing .', 'but with the same users the solutions were created previously also .', 'note : even after the error message is shown the solution can be seen in solution explorer window but when clicked on it to sync then it says solution does not exist in the system .', 'can you please check this ?', 'this is blocking us for creating solution to test our automates .', 'regards vishwanath a. 
telsang hi colleagues the issue is coming up because .', 'myproj is not getting created in xrep .', 'please take over and check what can be done .', 'hi i debugged the sdk and found out thar we call the pdi _ 1o _ product _ create to create product and after the product creation is done we make a separate call to generate solution file ( .', 'suo ) the load in system qtc 906 is too heavy .', 'thera are in total 4700 solution present in the tenant .', 'since the system load is too high the solution creation is being timedout and we can make a backend call for creating these solution realted files .', 'to resolve the issue we need to make system faster by removing the unwanted solution .', 'create a dummy solution in any tenant 2 .', 'do a download as copy and deploy to qtc 906 .', 'use this solution for automate related task .', 'regards nadeem hi i debugged the sdk and found out thar we call the pdi _ 1o _ product _ create to create product and after the product creation is done we make a separate call to generate solution file ( .', 'suo ) the load in system qtc 906 is too heavy .', 'thera are in total 4700 solution present in the tenant .', 'since the system load is too high the solution creation is being timedout and we can make a backend call for creating these solution realted files .', 'to resolve the issue we need to make system faster by removing the unwanted solution .', 'create a dummy solution in any tenant 2 .', 'do a download as copy and deploy to qtc 906 .', 'use this solution for automate related task .'], 'tgt_txt': 'the solution creation is failing in qtc 906 with all the users with generic error message saying authorisation is missing .<q>the issue is coming up because .<q>myproj is not getting created in xrep .<q>the load in system qtc 906 is too heavy .<q>there are in total 4700 solution present in the tenant .<q>since the system load is too high the solution creation is being timedout .<q>to workaround : 1 .<q>create a dummy solution in any tenant 2 .<q>do a download as copy and deploy to qtc 906 .<q>3 .<q>create patch in 906 .<q>4 .<q>use this solution for automate related task .'}
| MIT | Colab NoteBooks/BertSum_and_PreSumm_preprocessing.ipynb | gagan94/PreSumm |
Image analysis in Python with SciPy and scikit-image To participate, please follow the preparation instructions at https://github.com/scikit-image/skimage-tutorials/ (click on **preparation.md**). TL;DR: Install Python 3.6, scikit-image, and the Jupyter notebook. Then clone this repo:
```
git clone --depth=1 https://github.com/scikit-image/skimage-tutorials
```
If you cloned it before today, use `git pull origin` to get the latest changes. | %run ../../check_setup.py | [✓] scikit-image 0.15.0
[✓] numpy 1.16.2
[✓] scipy 1.2.1
[✓] matplotlib 3.0.3
[✓] notebook 5.7.6
[✓] scikit-learn 0.20.3
| CC-BY-4.0 | workshops/2019-scipy/index.ipynb | whocaresustc/skimage-tutorials |
Tf-idf Walkthrough for scikit-learn When I was using scikit-learn's extremely handy `TfidfVectorizer` I had some trouble interpreting the results since they seem to be a little bit different from the standard convention of how the *term frequency-inverse document frequency* (tf-idf) is calculated. Here, I just put together a brief overview of how the `TfidfVectorizer` works -- mainly as a personal reference sheet, but maybe it is useful to others as well. Sections- [What are Tf-idfs all about?](What-are-Tf-idfs-all-about?)- [Example data](Example-data)- [Raw term frequency](Raw-term-frequency)- [Normalized term frequency](Normalized-term-frequency)- [Term frequency-inverse document frequency -- tf-idf](Term-frequency-inverse-document-frequency----tf-idf)- [Inverse document frequency](Inverse-document-frequency)- [Normalized tf-idf](Normalized-tf-idf)- [Smooth idf](Smooth-idf)- [Tf-idf in scikit-learn](Tf-idf-in-scikit-learn)- [TfidfVectorizer defaults](TfidfVectorizer-defaults) What are Tf-idfs all about? [[back to top](Sections)] Tf-idfs are a way to represent documents as feature vectors. Tf-idfs can be understood as a modification of the *raw term frequencies* (tf); the tf is the count of how often a particular word occurs in a given document. The concept behind the tf-idf is to downweight terms proportionally to the number of documents in which they occur. Here, the idea is that terms that occur in many different documents are likely unimportant or don't contain any useful information for Natural Language Processing tasks such as document classification. Example data [[back to top](Sections)] For the following sections, let us consider the following dataset that consists of 3 documents: | import numpy as np
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining and the weather is sweet']) | _____no_output_____ | MIT | pattern-classification/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb | gopala-kr/ds-notebooks |
Raw term frequency [[back to top](Sections)] First, we will start with the *raw term frequency* tf(t, d), which is defined as the number of times a term *t* occurs in a document *d*. Alternative term frequency definitions include the binary term frequency, log-scaled term frequency, and augmented term frequency [[1](References)]. Using the [`CountVectorizer`](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) from scikit-learn, we can construct a bag-of-words model with the term frequencies as follows: | from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
tf = cv.fit_transform(docs).toarray()
tf
cv.vocabulary_ | _____no_output_____ | MIT | pattern-classification/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb | gopala-kr/ds-notebooks |
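As an aside, the alternative term-frequency variants mentioned above can be sketched directly from the raw counts; this is just an illustration of two common definitions and is not part of the scikit-learn pipeline used below.

```python
# Binary tf: 1 if the term occurs at all; log-scaled tf (one common variant): log(1 + raw count).
tf_binary = (tf > 0).astype(int)
tf_log = np.log1p(tf)
print(tf_binary[2])
print(tf_log[2])
```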
Based on the vocabulary, the word "and" would be the 1st feature in each document vector in `tf`, the word "is" the 2nd, the word "shining" the 3rd, etc. Normalized term frequency [[back to top](Sections)] In this section, we will take a brief look at how the tf-feature vector can be normalized, which will be useful later. The most common way to normalize the raw term frequency is l2-normalization, i.e., dividing the raw term frequency vector $v$ by its length $||v||_2$ (L2- or Euclidean norm).$$v_{norm} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_1^2 + v_2^2 + \dots + v_n^2}} = \frac{v}{\big(\sum_{i=1}^n v_i^2 \big)^{\frac{1}{2}}}$$ For example, we would normalize our 3rd document `'The sun is shining and the weather is sweet'` as follows: $\frac{[1, 2, 1, 1, 1, 2, 1]}{\sqrt{13}} = [ 0.2773501, 0.5547002, 0.2773501, 0.2773501, 0.2773501, 0.5547002, 0.2773501]$ | tf_norm = tf[2] / np.sqrt(np.sum(tf[2]**2))
tf_norm | _____no_output_____ | MIT | pattern-classification/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb | gopala-kr/ds-notebooks |
In scikit-learn, we can use the [`TfidfTransformer`](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html) to normalize the term frequencies if we disable the inverse document frequency calculation (`use_idf: False` and `smooth_idf=False`): | from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=False, norm='l2', smooth_idf=False)
tf_norm = tfidf.fit_transform(tf).toarray()
print('Normalized term frequencies of document 3:\n %s' % tf_norm[-1]) | Normalized term frequencies of document 3:
[ 0.2773501 0.5547002 0.2773501 0.2773501 0.2773501 0.5547002
0.2773501]
| MIT | pattern-classification/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb | gopala-kr/ds-notebooks |
Term frequency-inverse document frequency -- tf-idf [[back to top](Sections)] Most commonly, the term frequency-inverse document frequency (tf-idf) is calculated as follows [[1](References)]: $$\text{tf-idf}(t, d) = \text{tf}(t, d) \times \text{idf}(t),$$where idf is the inverse document frequency, which we will introduce in the next section. Inverse document frequency [[back to top](Sections)] In order to understand the *inverse document frequency* idf, let us first introduce the term *document frequency* $\text{df}(d,t)$, which is simply the number of documents $d$ that contain the term $t$. We can then define the idf as follows: $$\text{idf}(t) = log{\frac{n_d}{1+\text{df}(d,t)}},$$ where $n_d$: The total number of documents $\text{df}(d,t)$: The number of documents that contain term $t$. Note that the constant 1 is added to the denominator to avoid a zero-division error if a term is not contained in any document in the test dataset. Now, let us calculate the idfs of the words "and", "is," and "shining:" | n_docs = len(docs)
df_and = 1
idf_and = np.log(n_docs / (1 + df_and))
print('idf "and": %s' % idf_and)
df_is = 3
idf_is = np.log(n_docs / (1 + df_is))
print('idf "is": %s' % idf_is)
df_shining = 2
idf_shining = np.log(n_docs / (1 + df_shining))
print('idf "shining": %s' % idf_shining) | idf "and": 0.405465108108
idf "is": -0.287682072452
idf "shining": 0.0
| MIT | pattern-classification/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb | gopala-kr/ds-notebooks |
Using those idfs, we can eventually calculate the tf-idfs for the 3rd document: | print('Tf-idfs in document 3:\n')
print('tf-idf "and": %s' % (1 * idf_and))
print('tf-idf "is": %s' % (2 * idf_is))
print('tf-idf "shining": %s' % (1 * idf_shining)) | Tf-idfs in document 3:
tf-idf "and": 0.405465108108
tf-idf "is": -0.575364144904
tf-idf "shining": 0.0
| MIT | pattern-classification/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb | gopala-kr/ds-notebooks |
Tf-idf in scikit-learn [[back to top](Sections)] The tf-idfs in scikit-learn are calculated a little bit differently. Here, the `+1` count is added to the idf itself rather than to the df in the denominator: $$\text{idf}(t) = log{\frac{n_d}{\text{df}(d,t)}} + 1$$ We can demonstrate this by calculating the idfs manually using the equation above and comparing the results to the TfidfTransformer output using the settings `use_idf=True, smooth_idf=False, norm=None`. In the following examples, we will again be using the words "and," "is," and "shining:" from document 3. | tf_and = 1
df_and = 1
tf_and * (np.log(n_docs / df_and) + 1)
tf_shining = 2
df_shining = 3
tf_shining * (np.log(n_docs / df_shining) + 1)
tf_is = 1
df_is = 2
tf_is * (np.log(n_docs / df_is) + 1)
tfidf = TfidfTransformer(use_idf=True, smooth_idf=False, norm=None)
tfidf.fit_transform(tf).toarray()[-1][0:3] | _____no_output_____ | MIT | pattern-classification/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb | gopala-kr/ds-notebooks |
Normalized tf-idf [[back to top](Sections)] Now, let us calculate the normalized tf-idfs. Our feature vector of un-normalized tf-idfs for document 3 would look as follows if we'd applied the equation from the previous section to all words in the document: | tf_idfs_d3 = np.array([[ 2.09861229, 2.0, 1.40546511, 1.40546511, 1.40546511, 2.0, 1.40546511]]) | _____no_output_____ | MIT | pattern-classification/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb | gopala-kr/ds-notebooks |
Using the l2-norm, we then normalize the tf-idfs as follows: | tf_idfs_d3_norm = tf_idfs_d3[-1] / np.sqrt(np.sum(tf_idfs_d3[-1]**2))
tf_idfs_d3_norm | _____no_output_____ | MIT | pattern-classification/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb | gopala-kr/ds-notebooks |
And finally, we compare the results to the results that the `TfidfTransformer` returns. | tfidf = TfidfTransformer(use_idf=True, smooth_idf=False, norm='l2')
tfidf.fit_transform(tf).toarray()[-1] | _____no_output_____ | MIT | pattern-classification/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb | gopala-kr/ds-notebooks |
Smooth idf [[back to top](Sections)] Another parameter in the `TfidfTransformer` is the `smooth_idf`, which is described as> smooth_idf : boolean, default=True Smooth idf weights by adding one to document frequencies, as if an extra document was seen containing every term in the collection exactly once. Prevents zero divisions. So, our idf would then be defined as follows: $$\text{idf}(t) = log{\frac{1 + n_d}{1+\text{df}(d,t)}} + 1$$ To confirm that we understand the `smooth_idf` parameter correctly, let us walk through the 3-word example again: | tfidf = TfidfTransformer(use_idf=True, smooth_idf=True, norm=None)
tfidf.fit_transform(tf).toarray()[-1][:3]
tf_and = 1
df_and = 1
tf_and * (np.log((n_docs+1) / (df_and+1)) + 1)
tf_is = 2
df_is = 3
tf_is * (np.log((n_docs+1) / (df_is+1)) + 1)
tf_shining = 1
df_shining = 2
tf_shining * (np.log((n_docs+1) / (df_shining+1)) + 1) | _____no_output_____ | MIT | pattern-classification/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb | gopala-kr/ds-notebooks |
TfidfVectorizer defaults [[back to top](Sections)] Finally, we now understand the default settings in the `TfidfTransformer`, which are: - `use_idf=True`- `smooth_idf=True`- `norm='l2'` And the equation can be summarized as $\text{tf-idf}(t, d) = \text{tf}(t, d) \times (\text{idf}(t) + 1),$ where $\text{idf}(t) = log{\frac{1 + n_d}{1+\text{df}(d,t)}}.$ | tfidf = TfidfTransformer()
tfidf.fit_transform(tf).toarray()[-1]
smooth_tfidfs_d3 = np.array([[ 1.69314718, 2.0, 1.28768207, 1.28768207, 1.28768207, 2.0, 1.28768207]])
smooth_tfidfs_d3_norm = smooth_tfidfs_d3[-1] / np.sqrt(np.sum(smooth_tfidfs_d3[-1]**2))
smooth_tfidfs_d3_norm | _____no_output_____ | MIT | pattern-classification/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb | gopala-kr/ds-notebooks |
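As a final cross-check, the `TfidfVectorizer` itself should reproduce the same numbers directly from the raw documents, since it simply chains a `CountVectorizer` with a `TfidfTransformer` using these defaults.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vect = TfidfVectorizer()  # defaults: use_idf=True, smooth_idf=True, norm='l2'
print(tfidf_vect.fit_transform(docs).toarray()[-1])
```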
This notebook is part of the $\omega radlib$ documentation: http://wradlib.org/wradlib-docs. Copyright (c) 2018, $\omega radlib$ developers. Distributed under the MIT License. See LICENSE.txt for more info. Beam Blockage Calculation using a DEM Here, we derive (**p**artial) **b**eam-**b**lockage (**PBB**) from a **D**igital **E**levation **M**odel (**DEM**). We require- the local radar setup (sitecoords, number of rays, number of bins, antenna elevation, beamwidth, and the range resolution);- a **DEM** with an adequate resolution. Here we use pre-processed data from the [GTOPO30](https://lta.cr.usgs.gov/GTOPO30) and [SRTM](http://www2.jpl.nasa.gov/srtm) missions. | import wradlib as wrl
import matplotlib.pyplot as pl
import matplotlib as mpl
import warnings
warnings.filterwarnings('ignore')
try:
get_ipython().magic("matplotlib inline")
except:
pl.ion()
import numpy as np | _____no_output_____ | MIT | notebooks/beamblockage/wradlib_beamblock.ipynb | kmuehlbauer/wradlib-notebooks |
Setup for Bonn radar First, we need to define some radar specifications (here: *University of Bonn*). | sitecoords = (7.071663, 50.73052, 99.5)
nrays = 360 # number of rays
nbins = 1000 # number of range bins
el = 1.0 # vertical antenna pointing angle (deg)
bw = 1.0 # half power beam width (deg)
range_res = 100. # range resolution (meters) | _____no_output_____ | MIT | notebooks/beamblockage/wradlib_beamblock.ipynb | kmuehlbauer/wradlib-notebooks |
Create the range, azimuth, and beamradius arrays. | r = np.arange(nbins) * range_res
beamradius = wrl.util.half_power_radius(r, bw) | _____no_output_____ | MIT | notebooks/beamblockage/wradlib_beamblock.ipynb | kmuehlbauer/wradlib-notebooks |
We use - [wradlib.georef.sweep_centroids](http://wradlib.org/wradlib-docs/latest/generated/wradlib.georef.sweep_centroids.html) and - [wradlib.georef.spherical_to_proj](http://wradlib.org/wradlib-docs/latest/generated/wradlib.georef.spherical_to_proj.html) to calculate the spherical coordinates of the bin centroids and their longitude, latitude and altitude. | coord = wrl.georef.sweep_centroids(nrays, range_res, nbins, el)
coords = wrl.georef.spherical_to_proj(coord[..., 0],
np.degrees(coord[..., 1]),
coord[..., 2], sitecoords)
lon = coords[..., 0]
lat = coords[..., 1]
alt = coords[..., 2]
polcoords = coords[..., :2]
print("lon,lat,alt:", coords.shape)
rlimits = (lon.min(), lat.min(), lon.max(), lat.max())
print("Radar bounding box:\n\t%.2f\n%.2f %.2f\n\t%.2f" %
(lat.max(), lon.min(), lon.max(), lat.min())) | _____no_output_____ | MIT | notebooks/beamblockage/wradlib_beamblock.ipynb | kmuehlbauer/wradlib-notebooks |
Preprocessing the digital elevation model - Read the DEM from a ``geotiff`` file (in `WRADLIB_DATA`);- clip the region inside the bounding box;- map the DEM values to the polar grid points. *Note*: You can choose between the coarser resolution `bonn_gtopo.tif` (from GTOPO30) and the finer resolution `bonn_new.tif` (from the SRTM mission). The DEM raster data is opened via [wradlib.io.open_raster](http://wradlib.org/wradlib-docs/latest/generated/wradlib.io.open_raster.html) and extracted via [wradlib.georef.extract_raster_dataset](http://wradlib.org/wradlib-docs/latest/generated/wradlib.georef.extract_raster_dataset.html). | #rasterfile = wrl.util.get_wradlib_data_file('geo/bonn_gtopo.tif')
rasterfile = wrl.util.get_wradlib_data_file('geo/bonn_new.tif')
ds = wrl.io.open_raster(rasterfile)
rastervalues, rastercoords, proj = wrl.georef.extract_raster_dataset(ds, nodata=-32768.)
# Clip the region inside our bounding box
ind = wrl.util.find_bbox_indices(rastercoords, rlimits)
rastercoords = rastercoords[ind[1]:ind[3], ind[0]:ind[2], ...]
rastervalues = rastervalues[ind[1]:ind[3], ind[0]:ind[2]]
# Map rastervalues to polar grid points
polarvalues = wrl.ipol.cart2irregular_spline(rastercoords, rastervalues,
polcoords, order=3,
prefilter=False) | _____no_output_____ | MIT | notebooks/beamblockage/wradlib_beamblock.ipynb | kmuehlbauer/wradlib-notebooks |
Calculate Beam-Blockage Now we can finally apply the [wradlib.qual.beam_block_frac](http://wradlib.org/wradlib-docs/latest/generated/wradlib.qual.beam_block_frac.html) function to calculate the PBB. | PBB = wrl.qual.beam_block_frac(polarvalues, alt, beamradius)
PBB = np.ma.masked_invalid(PBB)
print(PBB.shape) | _____no_output_____ | MIT | notebooks/beamblockage/wradlib_beamblock.ipynb | kmuehlbauer/wradlib-notebooks |
So far, we calculated the fraction of beam blockage for each bin. But we need to take into account that the radar signal travels along a beam. Cumulative beam blockage (CBB) in one bin along a beam will always be at least as high as the maximum PBB of the preceding bins (see [wradlib.qual.cum_beam_block_frac](http://wradlib.org/wradlib-docs/latest/generated/wradlib.qual.cum_beam_block_frac.html)) | CBB = wrl.qual.cum_beam_block_frac(PBB)
print(CBB.shape) | _____no_output_____ | MIT | notebooks/beamblockage/wradlib_beamblock.ipynb | kmuehlbauer/wradlib-notebooks |
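The rule that CBB is at least as high as the maximum PBB of the preceding bins amounts to a running maximum along each ray. A rough sketch of that idea follows (masked bins are filled with 0 for simplicity; wradlib's own implementation remains the reference).

```python
# Rough illustration: cumulative beam blockage as a running maximum along each ray
# (axis 1 is the range dimension of the (nrays, nbins) arrays used here).
CBB_sketch = np.maximum.accumulate(np.ma.filled(PBB, 0.), axis=1)
print(CBB_sketch.shape)  # should mirror the CBB array computed above
```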
Visualize Beamblockage Now we visualize- the average terrain altitude per radar bin- a beam blockage map- interaction with terrain along a single beam | # just a little helper function to style x and y axes of our maps
def annotate_map(ax, cm=None, title=""):
ticks = (ax.get_xticks()/1000).astype(np.int)
ax.set_xticklabels(ticks)
ticks = (ax.get_yticks()/1000).astype(np.int)
ax.set_yticklabels(ticks)
ax.set_xlabel("Kilometers")
ax.set_ylabel("Kilometers")
if not cm is None:
pl.colorbar(cm, ax=ax)
if not title=="":
ax.set_title(title)
ax.grid()
fig = pl.figure(figsize=(10, 8))
# create subplots
ax1 = pl.subplot2grid((2, 2), (0, 0))
ax2 = pl.subplot2grid((2, 2), (0, 1))
ax3 = pl.subplot2grid((2, 2), (1, 0), colspan=2, rowspan=1)
# azimuth angle
angle = 225
# Plot terrain (on ax1)
ax1, dem = wrl.vis.plot_ppi(polarvalues,
ax=ax1, r=r,
az=np.degrees(coord[:,0,1]),
cmap=mpl.cm.terrain, vmin=0.)
ax1.plot([0,np.sin(np.radians(angle))*1e5],
[0,np.cos(np.radians(angle))*1e5],"r-")
ax1.plot(sitecoords[0], sitecoords[1], 'ro')
annotate_map(ax1, dem, 'Terrain within {0} km range'.format(np.max(r / 1000.) + 0.1))
# Plot CBB (on ax2)
ax2, cbb = wrl.vis.plot_ppi(CBB, ax=ax2, r=r,
az=np.degrees(coord[:,0,1]),
cmap=mpl.cm.PuRd, vmin=0, vmax=1)
annotate_map(ax2, cbb, 'Beam-Blockage Fraction')
# Plot single ray terrain profile on ax3
bc, = ax3.plot(r / 1000., alt[angle, :], '-b',
linewidth=3, label='Beam Center')
b3db, = ax3.plot(r / 1000., (alt[angle, :] + beamradius), ':b',
linewidth=1.5, label='3 dB Beam width')
ax3.plot(r / 1000., (alt[angle, :] - beamradius), ':b')
ax3.fill_between(r / 1000., 0.,
polarvalues[angle, :],
color='0.75')
ax3.set_xlim(0., np.max(r / 1000.) + 0.1)
ax3.set_ylim(0., 3000)
ax3.set_xlabel('Range (km)')
ax3.set_ylabel('Altitude (m)')
ax3.grid()
axb = ax3.twinx()
bbf, = axb.plot(r / 1000., CBB[angle, :], '-k',
label='BBF')
axb.set_ylabel('Beam-blockage fraction')
axb.set_ylim(0., 1.)
axb.set_xlim(0., np.max(r / 1000.) + 0.1)
legend = ax3.legend((bc, b3db, bbf),
('Beam Center', '3 dB Beam width', 'BBF'),
loc='upper left', fontsize=10) | _____no_output_____ | MIT | notebooks/beamblockage/wradlib_beamblock.ipynb | kmuehlbauer/wradlib-notebooks |
Visualize Beam Propagation showing earth curvature Now we visualize- interaction with terrain along a single beam. In this representation the earth curvature is shown. For this we assume the earth is a sphere with a radius of exactly 6370000 m. This is needed to get the height ticks at nice positions. | def height_formatter(x, pos):
x = (x - 6370000) / 1000
fmt_str = '{:g}'.format(x)
return fmt_str
def range_formatter(x, pos):
x = x / 1000.
fmt_str = '{:g}'.format(x)
return fmt_str | _____no_output_____ | MIT | notebooks/beamblockage/wradlib_beamblock.ipynb | kmuehlbauer/wradlib-notebooks |
- The [wradlib.vis.create_cg](http://wradlib.org/wradlib-docs/latest/generated/wradlib.vis.create_cg.html)-function is used to create the curved geometries. - The actual data is plotted as (theta, range) on the parasite axis. - Some tweaking is needed to make the final plot look nice. | fig = pl.figure(figsize=(10, 6))
cgax, caax, paax = wrl.vis.create_cg('RHI', fig, 111)
# azimuth angle
angle = 225
# fix grid_helper
er = 6370000
gh = cgax.get_grid_helper()
gh.grid_finder.grid_locator2._nbins=80
gh.grid_finder.grid_locator2._steps=[1,2,4,5,10]
# calculate beam_height and arc_distance for ke=1
# means line of sight
bhe = wrl.georef.bin_altitude(r, 0, sitecoords[2], re=er, ke=1.)
ade = wrl.georef.bin_distance(r, 0, sitecoords[2], re=er, ke=1.)
nn0 = np.zeros_like(r)
# for nice plotting we assume earth_radius = 6370000 m
ecp = nn0 + er
# theta (arc_distance sector angle)
thetap = - np.degrees(ade/er) + 90.0
# zero degree elevation with standard refraction
bh0 = wrl.georef.bin_altitude(r, 0, sitecoords[2], re=er)
# plot (ecp is earth surface normal null)
bes, = paax.plot(thetap, ecp, '-k', linewidth=3, label='Earth Surface NN')
bc, = paax.plot(thetap, ecp + alt[angle, :], '-b', linewidth=3, label='Beam Center')
bc0r, = paax.plot(thetap, ecp + bh0 + alt[angle, 0] , '-g', label='0 deg Refraction')
bc0n, = paax.plot(thetap, ecp + bhe + alt[angle, 0], '-r', label='0 deg line of sight')
b3db, = paax.plot(thetap, ecp + alt[angle, :] + beamradius, ':b', label='+3 dB Beam width')
paax.plot(thetap, ecp + alt[angle, :] - beamradius, ':b', label='-3 dB Beam width')
# orography
paax.fill_between(thetap, ecp,
ecp + polarvalues[angle, :],
color='0.75')
# shape axes
cgax.set_xlim(0, np.max(ade))
cgax.set_ylim([ecp.min()-1000, ecp.max()+2500])
caax.grid(True, axis='x')
cgax.grid(True, axis='y')
cgax.axis['top'].toggle(all=False)
caax.yaxis.set_major_locator(mpl.ticker.MaxNLocator(steps=[1,2,4,5,10], nbins=20, prune='both'))
caax.xaxis.set_major_locator(mpl.ticker.MaxNLocator())
caax.yaxis.set_major_formatter(mpl.ticker.FuncFormatter(height_formatter))
caax.xaxis.set_major_formatter(mpl.ticker.FuncFormatter(range_formatter))
caax.set_xlabel('Range (km)')
caax.set_ylabel('Altitude (km)')
legend = paax.legend((bes, bc0n, bc0r, bc, b3db),
('Earth Surface NN', '0 deg line of sight', '0 deg std refraction', 'Beam Center', '3 dB Beam width'),
loc='upper left', fontsize=10) | _____no_output_____ | MIT | notebooks/beamblockage/wradlib_beamblock.ipynb | kmuehlbauer/wradlib-notebooks |
Deploy SageMaker Serverless Endpoint Sentiment Binary Classification (fine-tuning with KoELECTRA-Small-v3 model and Naver Sentiment Movie Corpus dataset)- KoELECTRA: https://github.com/monologg/KoELECTRA- Naver Sentiment Movie Corpus Dataset: https://github.com/e9t/nsmc--- Overview Amazon SageMaker Serverless Inference is a new inference option launched at re:Invent 2021, built so that machine learning models can be deployed and scaled easily without the burden of managing hosting infrastructure. SageMaker Serverless Inference automatically launches compute resources and scales them in and out according to traffic, so there is no need to choose instance types or manage scaling policies. It is therefore ideal for workloads that have idle periods between traffic spikes and can tolerate cold starts. Difference from Lambda Serverless Inference Lambda Serverless Inference- Build/debug a Docker image for the Lambda container and push it to Amazon ECR (Amazon Elastic Container Registry)- Option 1: Create a Lambda function and deploy the model yourself- Option 2: Deploy the model on SageMaker through the SageMaker API (creating `LambdaModel` and `LambdaPredictor` resources in sequence) Note that when using Option 2 you have to set up the appropriate permissions yourself: - create and attach a policy that allows ECR access for the role associated with SageMaker - create a role that allows the SageMaker notebook to invoke Lambda - create and attach a policy that allows the Lambda function to access the private ECR repository SageMaker Serverless Inference You only need to change the endpoint configuration in your existing endpoint deployment code, and since no separate Docker image build is required, serverless inference is quick and easy to set up.**Note**- The Seoul region is not supported yet, so choose one of the regions below. - Currently supported regions: US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Tokyo) and Asia Pacific (Sydney)- boto3, botocore, sagemaker, and awscli must be the December 2021 releases or later. 1. Upload Model Artifacts---Archive the model and upload it to S3. | !pip install -qU sagemaker botocore boto3 awscli
!pip install --ignore-installed PyYAML
!pip install transformers==4.12.5
import torch
import torchvision
import torchvision.models as models
import sagemaker
from sagemaker import get_execution_role
from sagemaker.utils import name_from_base
from sagemaker.pytorch import PyTorchModel
import boto3
import datetime
import time
from time import strftime,gmtime
import json
import os
import io
import torchvision.transforms as transforms
from src.utils import print_outputs, upload_model_artifact_to_s3, NLPPredictor
role = get_execution_role()
boto_session = boto3.session.Session()
sm_session = sagemaker.session.Session()
sm_client = boto_session.client("sagemaker")
sm_runtime = boto_session.client("sagemaker-runtime")
region = boto_session.region_name
bucket = sm_session.default_bucket()
prefix = 'serverless-inference-kornlp-nsmc'
print(f'region = {region}')
print(f'role = {role}')
print(f'bucket = {bucket}')
print(f'prefix = {prefix}')
model_variant = 'modelA'
nlp_task = 'nsmc'
model_path = f'model-{nlp_task}'
model_s3_uri = upload_model_artifact_to_s3(model_variant, model_path, bucket, prefix) | _____no_output_____ | MIT | key_features/ptn_4.2_serverless-inference/serverless_endpoint_kornlp_nsmc.ipynb | aws-samples/sm-model-serving-patterns |
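The `upload_model_artifact_to_s3` helper is imported from the repository's `src/utils.py` and is not shown here. Below is a hypothetical stand-in, assuming the helper simply archives the local model directory and uploads it with `S3Uploader` (the real helper may differ).

```python
# Hypothetical stand-in for the helper imported from src.utils: pack the fine-tuned model
# directory into a tar.gz archive and upload it under the given prefix.
import tarfile
from sagemaker.s3 import S3Uploader

def upload_model_artifact_to_s3_sketch(variant, local_dir, bucket, prefix):
    archive = f"{variant}-model.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(local_dir, arcname=".")
    return S3Uploader.upload(archive, f"s3://{bucket}/{prefix}/{variant}")
```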
2. Create SageMaker Serverless Endpoint---Creating a SageMaker serverless endpoint is 99% identical to deploying a regular SageMaker real-time endpoint. What is the remaining 1%? When defining the endpoint configuration, you only need to add the memory size (`MemorySizeInMB`) and maximum concurrency (`MaxConcurrency`) parameters under `ServerlessConfig`.
```python
sm_client.create_endpoint_config(
    ...
    "ServerlessConfig": {
        "MemorySizeInMB": 2048,
        "MaxConcurrency": 20
    }
)
```
See the link below for details.- Amazon SageMaker Developer Guide - Serverless Inference: https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html Create Inference container definition for Model | from sagemaker.image_uris import retrieve
deploy_instance_type = 'ml.m5.xlarge'
pt_ecr_image_uri = retrieve(
framework='pytorch',
region=region,
version='1.7.1',
py_version='py3',
instance_type = deploy_instance_type,
accelerator_type=None,
image_scope='inference'
) | _____no_output_____ | MIT | key_features/ptn_4.2_serverless-inference/serverless_endpoint_kornlp_nsmc.ipynb | aws-samples/sm-model-serving-patterns |
Create a SageMaker Model Call the `create_model` API to create a model that contains the definition of the container built in the code cell above. | model_name = f"KorNLPServerless-{nlp_task}-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
create_model_response = sm_client.create_model(
ModelName=model_name,
Containers=[
{
"Image": pt_ecr_image_uri,
"Mode": "SingleModel",
"ModelDataUrl": model_s3_uri,
"Environment": {
"SAGEMAKER_CONTAINER_LOG_LEVEL": "20",
"SAGEMAKER_PROGRAM": "inference_nsmc.py",
"SAGEMAKER_SUBMIT_DIRECTORY": model_s3_uri,
},
}
],
ExecutionRoleArn=role,
)
print(f"Created Model: {create_model_response['ModelArn']}") | _____no_output_____ | MIT | key_features/ptn_4.2_serverless-inference/serverless_endpoint_kornlp_nsmc.ipynb | aws-samples/sm-model-serving-patterns |
Create Endpoint Configuration The serverless behaviour of the endpoint is configured through `ServerlessConfig`. The maximum number of concurrent invocations (`MaxConcurrency`; max concurrent invocations) can be between 1 and 50, and `MemorySize` can be set to 1024MB, 2048MB, 3072MB, 4096MB, 5120MB or 6144MB. | endpoint_config_name = f"KorNLPServerlessEndpointConfig-{nlp_task}-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
endpoint_config_response = sm_client.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"VariantName": "AllTraffic",
"ModelName": model_name,
"ServerlessConfig": {
"MemorySizeInMB": 4096,
"MaxConcurrency": 20,
},
},
],
)
print(f"Created EndpointConfig: {endpoint_config_response['EndpointConfigArn']}") | _____no_output_____ | MIT | key_features/ptn_4.2_serverless-inference/serverless_endpoint_kornlp_nsmc.ipynb | aws-samples/sm-model-serving-patterns |
Create the SageMaker Serverless Endpoint Create the serverless endpoint with the `create_endpoint` API. This works exactly like creating a regular endpoint. | endpoint_name = f"KorNLPServerlessEndpoint-{nlp_task}-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}"
endpoint_response = sm_client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name
)
print(f"Creating Endpoint: {endpoint_response['EndpointArn']}") | _____no_output_____ | MIT | key_features/ptn_4.2_serverless-inference/serverless_endpoint_kornlp_nsmc.ipynb | aws-samples/sm-model-serving-patterns |
You can check the status of endpoint creation with the `describe_endpoint` API. SageMaker serverless endpoints are created faster than regular endpoints (about 2-3 minutes). | %%time
waiter = boto3.client('sagemaker').get_waiter('endpoint_in_service')
print("Waiting for endpoint to create...")
waiter.wait(EndpointName=endpoint_name)
resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
print(f"Endpoint Status: {resp['EndpointStatus']}") | _____no_output_____ | MIT | key_features/ptn_4.2_serverless-inference/serverless_endpoint_kornlp_nsmc.ipynb | aws-samples/sm-model-serving-patterns |
Direct Invocation for Model The first call incurs extra latency because of the cold start, but subsequent calls respond quickly since the endpoint stays warm. Keep in mind that it reverts to a cold state when it is not invoked for several minutes or when the request volume grows. | model_sample_path = 'samples/nsmc.txt'
!cat $model_sample_path
with open(model_sample_path, mode='rb') as file:
model_input_data = file.read()
model_response = sm_runtime.invoke_endpoint(
EndpointName=endpoint_name,
ContentType="application/jsonlines",
Accept="application/jsonlines",
Body=model_input_data
)
model_outputs = model_response['Body'].read().decode()
print()
print_outputs(model_outputs) | _____no_output_____ | MIT | key_features/ptn_4.2_serverless-inference/serverless_endpoint_kornlp_nsmc.ipynb | aws-samples/sm-model-serving-patterns |
Check Model Latency | import time
start = time.time()
for _ in range(10):
model_response = sm_runtime.invoke_endpoint(
EndpointName=endpoint_name,
ContentType="application/jsonlines",
Accept="application/jsonlines",
Body=model_input_data
)
elapsed = time.time() - start  # total seconds for the 10 warm invocations above
print(f'Average inference latency over 10 invocations is {elapsed / 10 * 1000:.1f} ms.') | _____no_output_____ | MIT | key_features/ptn_4.2_serverless-inference/serverless_endpoint_kornlp_nsmc.ipynb | aws-samples/sm-model-serving-patterns |
Clean Up--- | sm_client.delete_endpoint(EndpointName=endpoint_name)
sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
sm_client.delete_model(ModelName=model_name) | _____no_output_____ | MIT | key_features/ptn_4.2_serverless-inference/serverless_endpoint_kornlp_nsmc.ipynb | aws-samples/sm-model-serving-patterns |
Data validation with Voluptuous (schema definitions) In this notebook we use [Voluptuous](https://github.com/alecthomas/voluptuous) to define schemas for our data. We can then use schema checking at various points in our cleaning process to make sure we meet the criteria. Finally, we can use schema-check exceptions to flag, set aside, or remove impure or invalid data. 1. Imports | import logging
import pandas as pd
from datetime import datetime
from voluptuous import Schema, Required, Range, All, ALLOW_EXTRA
from voluptuous.error import MultipleInvalid, Invalid | _____no_output_____ | BSD-3-Clause | docs/clean-prep/voluptuous.ipynb | veit/jupyter-tutorial-de |
2. Logger | logger = logging.getLogger(0)
logger.setLevel(logging.WARNING) | _____no_output_____ | BSD-3-Clause | docs/clean-prep/voluptuous.ipynb | veit/jupyter-tutorial-de |
3. Read example data | sales = pd.read_csv('https://raw.githubusercontent.com/kjam/data-cleaning-101/master/data/sales_data.csv') | _____no_output_____ | BSD-3-Clause | docs/clean-prep/voluptuous.ipynb | veit/jupyter-tutorial-de |
4. Examine the data | sales.head()
sales.dtypes | _____no_output_____ | BSD-3-Clause | docs/clean-prep/voluptuous.ipynb | veit/jupyter-tutorial-de |
5. Define the schema | schema = Schema({
Required('sale_amount'): All(float,
Range(min=2.50, max=1450.99)),
}, extra=ALLOW_EXTRA)
error_count = 0
for s_id, sale in sales.T.to_dict().items():
try:
schema(sale)
except MultipleInvalid as e:
logging.warning('issue with sale: %s (%s) - %s',
s_id, sale['sale_amount'], e)
error_count += 1
error_count
sales.shape | _____no_output_____ | BSD-3-Clause | docs/clean-prep/voluptuous.ipynb | veit/jupyter-tutorial-de |
However, we do not yet know whether* we have an incorrectly defined schema* negative values might be returned or flagged incorrectly* higher values are combined purchases or special sales 6. Adding a custom validation | def ValidDate(fmt='%Y-%m-%d %H:%M:%S'):
return lambda v: datetime.strptime(v, fmt)
schema = Schema({
Required('timestamp'): All(ValidDate()),
}, extra=ALLOW_EXTRA)
error_count = 0
for s_id, sale in sales.T.to_dict().items():
try:
schema(sale)
except MultipleInvalid as e:
logging.warning('issue with sale: %s (%s) - %s',
s_id, sale['timestamp'], e)
error_count += 1
error_count | _____no_output_____ | BSD-3-Clause | docs/clean-prep/voluptuous.ipynb | veit/jupyter-tutorial-de |
7. Valid date structures are not yet valid data | def ValidDate(fmt='%Y-%m-%d %H:%M:%S'):
def validation_func(v):
try:
assert datetime.strptime(v, fmt) <= datetime.now()
except AssertionError:
raise Invalid('date is in the future! %s' % v)
return validation_func
schema = Schema({
Required('timestamp'): All(ValidDate()),
}, extra=ALLOW_EXTRA)
error_count = 0
for s_id, sale in sales.T.to_dict().items():
try:
schema(sale)
except MultipleInvalid as e:
logging.warning('issue with sale: %s (%s) - %s',
s_id, sale['timestamp'], e)
error_count += 1
error_count | _____no_output_____ | BSD-3-Clause | docs/clean-prep/voluptuous.ipynb | veit/jupyter-tutorial-de |
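The introduction mentioned flagging, setting aside, or removing invalid data. Here is a small sketch of how the validation errors collected above could be used to split the DataFrame (an illustration under the same schema; the variable names are arbitrary).

```python
# Collect the ids of rows that fail validation, set them aside and keep the rest.
invalid_ids = []
for s_id, sale in sales.T.to_dict().items():
    try:
        schema(sale)
    except (MultipleInvalid, Invalid):
        invalid_ids.append(s_id)

quarantined = sales.loc[invalid_ids]          # set aside for manual review
clean_sales = sales.drop(index=invalid_ids)   # rows that passed the schema
print(len(quarantined), len(clean_sales))
```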
open boundary spin 1/2 1-D Heisenberg model $H = J \sum_i [S_i^z S_{i+1}^z + \frac{1}{2}(S_i^+ S_{i+1}^- + S_i^- S_{i+1}^+)]$ exact result (Bethe Ansatz): L=16: E/J=-6.9117371455749; L=24: -10.4537857604096; L=32: -13.9973156182243; L=48: -21.0859563143863; L=64: -28.1754248597421 | from renormalizer.mps import Mps, Mpo, solver
from renormalizer.model import MolList2, ModelTranslator
from renormalizer.utils import basis as ba
from renormalizer.utils import Op
import numpy as np
# define the # of spins
nspin = 16
# define the model
# sigma^+ = S^+
# sigma^- = S^-
# 1/2 sigma^x,y,z = S^x,y,z
model = dict()
for ispin in range(nspin-1):
model[(f"e_{ispin}", f"e_{ispin+1}")] = [(Op("sigma_z",0),
Op("sigma_z",0), 1.0/4), (Op("sigma_+",0), Op("sigma_-",0), 1.0/2),
(Op("sigma_-",0), Op("sigma_+", 0), 1.0/2)]
# set the spin order and local basis
order = {}
basis = []
for ispin in range(nspin):
order[f"e_{ispin}"] = ispin
basis.append(ba.BasisHalfSpin(sigmaqn=[0,0]))
# construct MPO
mol_list2 = MolList2(order, basis, model, ModelTranslator.general_model)
mpo = Mpo(mol_list2)
print(f"mpo_bond_dims:{mpo.bond_dims}")
# set the sweep parameters
M=30
procedure = [[M, 0.2], [M, 0], [M, 0], [M,0], [M,0]]
# initialize a random MPS
qntot = 0
mps = Mps.random(mol_list2, qntot, M)
mps.optimize_config.procedure = procedure
mps.optimize_config.method = "2site"
# optimize MPS
energies = solver.optimize_mps_dmrg(mps.copy(), mpo)
print("gs energy:", energies.min())
# plot the microiterations vs energy error
import matplotlib.pyplot as plt
import logging
mpl_logger = logging.getLogger('matplotlib')
mpl_logger.setLevel(logging.WARNING)
plt.rc('font', family='Times New Roman', size=16)
plt.rc('axes', linewidth=1.5)
plt.rcParams['lines.linewidth'] = 2
std = -6.9117371455749
fig, ax = plt.subplots(figsize=(8,6))
plt.plot(np.arange(len(energies)), np.array(energies)-std,"o-",ms=5)
plt.yscale('log')
plt.xlabel("microiterations")
plt.ylabel("$\Delta$ E")
plt.ylim(1e-9, 10)
plt.show() | _____no_output_____ | Apache-2.0 | example/1D-Heisenberg.ipynb | liwt31/Renormalizer |
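To quantify how close the optimization gets to the Bethe Ansatz value quoted above, the relative error of the best sweep energy can be printed directly. A short check using the `energies` array and the L = 16 reference from the table:

```python
import numpy as np

e_exact = -6.9117371455749          # Bethe Ansatz result for L = 16
e_dmrg = np.array(energies).min()   # best energy found during the sweeps
rel_err = abs(e_dmrg - e_exact) / abs(e_exact)
print(f"DMRG energy: {e_dmrg:.10f}, exact: {e_exact}, relative error: {rel_err:.2e}")
```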
Striplog from CSV | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import striplog
striplog.__version__
from striplog import Lexicon, Decor, Component, Legend, Interval, Striplog | _____no_output_____ | Apache-2.0 | striplog_def_Matt.ipynb | ThomasMGeo/CSV2Strip |
Make legend. Most of the stuff in the dicts you made were about **display** — so they are going to make `Decor` objects. A collection of `Decor`s makes a `Legend`. A `Legend` determines how a striplog is displayed. First I'll make the components, since those are easy. I'll move `'train'` into there too, since it is to do with the rocks, not the display. If it seems weird having `'train'` in the `Component` (which is really supposed to be about direct descriptions of the rock, but the idea is that it's always the same for all specimens of that rock so it does fit here), then you could put it in `data` instead. | facies = {
's': Component({'lithology': 'sandstone', 'train':'y'}),
'i': Component({'lithology': 'interbedded', 'train':'y'}),
'sh': Component({'lithology': 'shale', 'train':'y'}),
'bs': Component({'lithology': 'sandstone', 'train': 'n'}),
't': Component({'lithology': 'turbidite', 'train':'y'}),
'nc': Component({'lithology': 'none', 'train':'n'}),
}
sandstone = Decor({
'component': facies['s'],
'colour': 'yellow',
'hatch': '.',
'width': '3',
})
interbedded = Decor({
'component': facies['i'],
'colour': 'darkseagreen',
'hatch': '--',
'width': '2',
})
shale = Decor({
'component': facies['sh'],
'colour': 'darkgray',
'hatch': '-',
'width': '1',
})
badsand = Decor({
'component': facies['bs'],
'colour': 'orange',
'hatch': '.',
'width': '3',
})
# Not sure about the best way to do this, probably better
# just to omit those intervals completely.
nocore = Decor({
'component': facies['nc'],
'colour': 'white',
'hatch': '/',
'width': '5',
})
turbidite = Decor({
'component': facies['t'],
'colour': 'green',
'hatch': 'xxx',
'width': '3',
})
legend = Legend([sandstone, badsand, interbedded, shale, turbidite, nocore])
legend | _____no_output_____ | Apache-2.0 | striplog_def_Matt.ipynb | ThomasMGeo/CSV2Strip |
Read CSV into striplog | strip = Striplog.from_csv('test.csv')
strip[0] | _____no_output_____ | Apache-2.0 | striplog_def_Matt.ipynb | ThomasMGeo/CSV2Strip |
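The contents of `test.csv` are not shown in this notebook. Judging by the later loop that reads `s.data['lithology']`, the file presumably holds one interval per row with a top, a base and the lithology code used in the legend. A purely hypothetical file with that layout (column names and depths are assumptions, not taken from the repository):

```python
# Hypothetical layout for a Striplog.from_csv input file -- purely illustrative
sample_csv = """top,base,lithology
0.0,12.5,s
12.5,18.0,i
18.0,25.0,sh
25.0,31.0,nc
"""
with open('test_example.csv', 'w') as f:
    f.write(sample_csv)
```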
Deal with lithology. The lithology has been turned into a component, but it's using the abbreviation... I can't figure out an elegant way to deal with this, so for now we'll just loop over the striplog and fix it. We read the `data` item's lithology (`'s'` in the top layer), then look up the correct lithology name in our abbreviation dictionary, then add the new component in the proper place. Finally, we delete the `data` we had. | for s in strip:
lith = s.data['lithology']
s.components = [facies[lith]]
s.data = {}
strip[0] | _____no_output_____ | Apache-2.0 | striplog_def_Matt.ipynb | ThomasMGeo/CSV2Strip |
That's better! | strip.plot(legend) | _____no_output_____ | Apache-2.0 | striplog_def_Matt.ipynb | ThomasMGeo/CSV2Strip |
Remove non-training layers | strip
strip_train = Striplog([s for s in strip if s.primary['train'] == 'y'])
strip_train
strip_train.plot(legend) | _____no_output_____ | Apache-2.0 | striplog_def_Matt.ipynb | ThomasMGeo/CSV2Strip |
This notebook aims to produce Figure S2 in Supplementary Material for the preprint: | import os
import random
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.dates as mdates
import datetime
import time
import csv
import math
import scipy
import seaborn as sns
from scipy.stats import iqr
import h5py
import pickle
from tqdm import tqdm
import copy
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
import iisignature
from datetime import date
from itertools import cycle
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from scipy import interp
from classifiers import *
from data_cleaning import *
from data_transforms import *
from ROC_functions import * | _____no_output_____ | Apache-2.0 | ROC_classification.ipynb | yuewu57/mental_health_AMoSS |
Load data | test_path='./all-true-colours-matlab-2017-02-27-18-38-12-nick/'
participants_list, participants_data_list,\
participants_time_list=loadParticipants(test_path)
Participants=make_classes(participants_data_list,\
participants_time_list,\
participants_list)
cohort=cleaning_sameweek_data(cleaning_same_data(Participants)) | 14050
| Apache-2.0 | ROC_classification.ipynb | yuewu57/mental_health_AMoSS |
**Length=20w** * Missing-response-incorporated signature-based classification model (MRSCM, level2) | y_tests_mc, y_scores_mc = model_roc(cohort, minlen=20, training=0.7, order=2, sample_size=100)
plot_roc(y_tests_mc, y_scores_mc, "signature_method_missingcount_2ndtrial", n_classes=3,lw=2) | The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
| Apache-2.0 | ROC_classification.ipynb | yuewu57/mental_health_AMoSS |
* Missing-response-incorporated signature-based classification model (MRSCM, level3) | y_tests_mc3, y_scores_mc3=model_roc(cohort, minlen=20, training=0.7,order=3, sample_size=100)
plot_roc(y_tests_mc3, y_scores_mc3, "signature_method_missingcount3", n_classes=3,lw=2) | The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
| Apache-2.0 | ROC_classification.ipynb | yuewu57/mental_health_AMoSS |
* Naive classification model | if __name__ == "__main__":
y_tests_naive, y_scores_naive=model_roc(cohort, minlen=20,\
training=0.7,\
order=None,\
sample_size=100,\
standardise=False,\
count=False,\
feedforward=False,\
naive=True,\
time=False)
plot_roc(y_tests_naive, y_scores_naive, "naive_method", n_classes=3,lw=2) | The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
| Apache-2.0 | ROC_classification.ipynb | yuewu57/mental_health_AMoSS |
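For a single summary number per model, the per-class curves can be reduced to a macro-average AUC with the scikit-learn functions already imported. This sketch assumes each `y_tests_*` holds binarized true labels of shape (n_samples, 3) and each `y_scores_*` the matching score matrix, consistent with `plot_roc` being called with `n_classes=3`; if `model_roc` returns per-trial lists instead, the arrays would need to be stacked first.

```python
n_classes = 3
for name, (y_t, y_s) in {'MRSCM level 2': (y_tests_mc, y_scores_mc),
                         'MRSCM level 3': (y_tests_mc3, y_scores_mc3),
                         'naive': (y_tests_naive, y_scores_naive)}.items():
    aucs = []
    for i in range(n_classes):
        fpr, tpr, _ = roc_curve(np.array(y_t)[:, i], np.array(y_s)[:, i])
        aucs.append(auc(fpr, tpr))
    print(name, 'per-class AUC:', [round(a, 3) for a in aucs],
          'macro:', round(np.mean(aucs), 3))
```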
Copyright 2018 The TensorFlow Authors. | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE. | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
Your first neural network: basic classification. View on TensorFlow.org / Run in Google Colab / View source on GitHub. Note: These documents were translated by the TensorFlow community. Because community translations are **best-effort**, there is no guarantee that they are accurate or reflect the latest state of the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions for improving the quality of this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To take part in community translation or review, please contact the [[email protected] mailing list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ja). This guide trains a neural network model to classify images of clothing, such as sneakers and shirts. It's fine if you don't understand every detail; this is a fast-paced overview of TensorFlow, and the details are covered later. The guide uses [tf.keras](https://www.tensorflow.org/guide/keras), TensorFlow's high-level API for building and training models. | try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
# Import TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Import helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
Loading the Fashion MNIST dataset. This guide uses [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist), which contains 70,000 grayscale images in 10 categories. Each is a low-resolution (28×28 pixel) image showing a single article of clothing, as in the figure below. <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNIST was developed as a replacement for the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, which often appears as the "Hello, World" of machine learning for image processing. MNIST consists of handwritten digits (0, 1, 2, etc.) in exactly the same format as the Fashion MNIST data used here. Fashion MNIST is used partly for variety and partly because it is slightly more challenging than plain MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected; they are good starting points for testing and debugging code. Here, 60,000 images are used for training and 10,000 images to evaluate how accurately the network learned to classify images. With TensorFlow, the Fashion MNIST data can be imported and loaded as shown below. | fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
Loading the dataset returns NumPy arrays. * The `train_images` and `train_labels` arrays are the **training set** used to train the model. * The trained model is tested against the **test set**, the `test_images` and `test_labels` arrays. The images are 28×28 NumPy arrays, with pixel values that are integers between 0 and 255. The **labels** are an array of integers from 0 to 9, each corresponding to a **class** of clothing: 0 T-shirt/top, 1 Trouser, 2 Pullover, 3 Dress, 4 Coat, 5 Sandal, 6 Shirt, 7 Sneaker, 8 Bag, 9 Ankle boot. Each image is mapped to a single label. Since the class names are not included with the dataset, store them here to use later when plotting the images. | class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
Explore the data. Before training the model, examine the format of the dataset. As shown below, the training set contains 60,000 images, each 28×28 pixels. | train_images.shape | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n
Likewise, there are 60,000 labels in the training set. | len(train_labels) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n
Each label is an integer between 0 and 9. | train_labels | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n
The test set contains 10,000 images, each 28×28 pixels. | test_images.shape | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n
And the test set contains 10,000 labels. | len(test_labels) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n
Preprocess the data. The data must be preprocessed before training the network. Inspecting the first image shows that the pixel values fall in the range 0 to 255. | plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show() | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
Scale these values to a range of 0 to 1 before feeding them to the neural network by dividing the pixel values by 255. It is important that the **training set** and the **test set** are preprocessed in the same way. | train_images = train_images / 255.0
test_images = test_images / 255.0 | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
Display the first 25 images from the **training set** with their class names, and verify that the data is in the correct format before building and training the network. | plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show() | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
Build the model. Building the neural network requires configuring the layers of the model and then compiling it. Set up the layers: the basic building block of a neural network is the **layer**. Layers extract representations from the data fed into them, and those representations should be more meaningful for the problem at hand. Most deep learning models consist of simple layers chained together, and most layers, such as `tf.keras.layers.Dense`, have parameters that are learned during training. | model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
]) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
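Because only the Dense layers have trainable parameters, `model.summary()` is a quick way to confirm the shapes and parameter counts described above (the first Dense layer should show 784×128+128 = 100,480 parameters):

```python
model.summary()
```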
The first layer in this network, `tf.keras.layers.Flatten`, transforms each image from a 2-dimensional array (28×28 pixels) to a 1-dimensional array of 28×28=784 pixels. Think of it as unstacking the rows of pixels in the image and lining them up; this layer has no parameters to learn and only reformats the data. After the pixels are flattened, the network consists of two `tf.keras.layers.Dense` layers, which are densely (fully) connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second and last layer is a 10-node **softmax** layer that returns an array of 10 probabilities summing to 1; each node outputs the probability that the current image belongs to one of the 10 classes. Compile the model: a few more settings are needed before the model is ready for training, and they are added at the model's **compile** step. * **Loss function**: measures how accurate the model is during training; minimizing this value steers the model in the right direction. * **Optimizer**: determines how the model is updated based on the data it sees and its loss function. * **Metrics**: used to monitor the training and testing steps. The example below uses *accuracy*, the fraction of images that are correctly classified. | model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
Train the model. Training the neural network requires the following steps: 1. Feed the training data to the model, in this example the `train_images` and `train_labels` arrays. 2. The model learns to associate images and labels. 3. Ask the model to make predictions on the test set, in this example the `test_images` array, and verify that the predictions match the labels in the `test_labels` array. To start training, call the `model.fit` method, so called because it "fits" the model to the training data. | model.fit(train_images, train_labels, epochs=5) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n
As the model trains, the loss and accuracy are displayed; this model reaches an accuracy of about 0.88 (88%) on the training data. Evaluate accuracy: next, compare how the model performs on the test dataset. | test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
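One common way to watch for the train/test gap discussed below is to hold out part of the training data during fitting via Keras' `validation_split` argument. A sketch (the 10% split is an arbitrary choice, not part of the original tutorial):

```python
history = model.fit(train_images, train_labels, epochs=5, validation_split=0.1)

# history.history holds per-epoch 'accuracy' and 'val_accuracy' curves;
# a widening gap between them is a sign of overfitting.
print(history.history['accuracy'][-1], history.history['val_accuracy'][-1])
```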
It turns out the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of **overfitting**: when a machine learning model performs worse on new data than it did during training. Make predictions: with the model trained, it can be used to make predictions about images. | predictions = model.predict(test_images) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n
Here the model has predicted the label for each image in the test set. Take a look at the first prediction. | predictions[0] | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n
A prediction is an array of 10 numbers that describe the model's 'confidence' that the image corresponds to each of the 10 articles of clothing. Check which label has the highest confidence value. | np.argmax(predictions[0]) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n
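Because the last layer is a softmax, each prediction row is a probability distribution, so a quick sanity check is that the 10 values sum to approximately 1:

```python
import numpy as np

np.sum(predictions[0])  # should be ~1.0 for a softmax output
```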
So the model is most confident that this image is an ankle boot, `class_names[9]`. Check the test label to confirm this is correct. | test_labels[0] | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n
All 10 channels of the prediction can be graphed. | def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue') | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
Look at the 0th image, its prediction, and the prediction array. | i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show() | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
Plot several images with their predictions. Correct prediction labels are blue and incorrect prediction labels are red; the number gives the percentage (out of 100) for the predicted label. Note that the model can be wrong even when it looks very confident. | # Plot the first X test images, their predicted labels, and the true labels.
# Correct predictions are shown in blue, incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show() | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
Finally, use the trained model to make a prediction about a single image. | # Grab an image from the test dataset
img = test_images[0]
print(img.shape) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
`tf.keras` models are optimized to make predictions on a **batch**, or collection, of examples at once, so even though a single image is used here it needs to be added to a list. | # Add the image to a batch where it's the only member
img = (np.expand_dims(img,0))
print(img.shape) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
Now predict the image: | predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n |
`model.predict` returns a list of lists, one for each image in the batch of data. Grab the prediction for our (only) image in the batch. | np.argmax(predictions_single[0]) | _____no_output_____ | Apache-2.0 | site/ja/tutorials/keras/classification.ipynb | KarimaTouati/docs-l10n
PA005: High-Value Customer Identification (Insiders). Cycle 0 - Solution planning (IOT). Cycle 1 - Cluster validation metrics: 1. Feature engineering * Recency * Frequency * Monetary 2. Cluster validation metrics * WSS (Within-Cluster Sum of Squares) * SS (Silhouette Score) 3. Cluster analysis * 3D plot * Cluster profile. Cycle 2 - Silhouette analysis: 1. Feature engineering * Average ticket 2. Silhouette analysis 3. Clustering visualization * UMAP 4. Cluster profile analysis * Description of the cluster centroids. Cycle 3 - Descriptive statistics: 1. Descriptive analysis * Numerical attributes * Categorical attributes 2. Data filtering * Returns and purchases 3. Feature engineering * Use of purchase data * Average recency * Number of returns. Input: 1. Business problem -- select the most valuable customers to join a loyalty program. 2. Dataset -- sales from an e-commerce site over a one-year period. Output: 1. The list of customers who will be part of the Insiders program -- List: client_id | is_insider 455534 | yes 433524 | no 2. A report answering the business questions: 2.0. Who is eligible to be part of the Insiders group? 2.1. How many customers will be part of the group? 2.2. What are the main characteristics of these customers? 2.3. What percentage of revenue comes from the Insiders? 2.4. What revenue can be expected from this group in the coming months? 2.5. What are the conditions for someone to be eligible for Insiders? 2.6. What are the conditions for someone to be removed from Insiders? 2.7. What guarantees that the Insiders program is better than the rest of the customer base? 2.8. What actions can the marketing team take to increase revenue? Tasks: 0. Who is eligible to be part of the Insiders group? - What are "high-value" customers? - Revenue: high average ticket, high LTV, low recency, low churn probability, high basket size, high predicted LTV, high purchase propensity. - Cost: low return rate. - Purchase experience: high average ratings. 1. How many customers will be part of the group? - Total number of customers - % in the Insiders group. 2. What are the main characteristics of these customers? - Customer characteristics: age, location. - Consumption characteristics: the clustering attributes. 3. What percentage of revenue comes from the Insiders? - Revenue for the year - Revenue from the Insiders. 4. What revenue can be expected from this group in the coming months? - LTV of the Insiders group - Cohort analysis. 5. What are the conditions for someone to be eligible for Insiders? - Define the update cadence - The person needs to be similar to someone in the group. 6. What are the conditions for someone to be removed from Insiders? - Define the update cadence - The person needs to be dissimilar to the people in the group. 7. What guarantees that the Insiders program is better than the rest of the base? - A/B testing - Bayesian A/B testing - Hypothesis tests. 8. What actions can the marketing team take to increase revenue? - Discounts - Purchase preferences - Shipping - Company visits. Benchmark of solutions: 1. Desk research. RFM model: 1. Recency a) time since the last purchase b) responsiveness 2. Frequency a) average time between transactions b) engagement 3. Monetary a) total spent, revenue b) 'high-value purchases'. 0.0. Imports | 
from yellowbrick.cluster import KElbowVisualizer
from yellowbrick.cluster import SilhouetteVisualizer
from matplotlib import pyplot as plt
from sklearn import cluster as c
from sklearn import metrics as m
from sklearn import preprocessing as pp
from plotly import express as px
import pandas as pd
import seaborn as sns
import numpy as np
import re
import umap.umap_ as umap
| /home/heitor/repos/insiders_clustering/venv/lib/python3.8/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
| MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
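The cycle-1 plan above builds its clustering features from RFM (recency, frequency, monetary). A minimal pandas sketch of those three aggregations, assuming the cleaned dataframe `df1` produced later in this notebook (column names follow the renaming done in section 1.1; the `gross_revenue` step assumes revenue = quantity × unit price):

```python
# Hypothetical RFM feature table -- a sketch, not the project's final feature set
df_ref = df1[['customer_id', 'invoice_no', 'invoice_date', 'quantity', 'unit_price']].copy()
df_ref['gross_revenue'] = df_ref['quantity'] * df_ref['unit_price']

snapshot_date = df_ref['invoice_date'].max()
rfm = df_ref.groupby('customer_id').agg(
    recency_days=('invoice_date', lambda x: (snapshot_date - x.max()).days),
    frequency=('invoice_no', 'nunique'),
    monetary=('gross_revenue', 'sum'),
).reset_index()
rfm.head()
```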
0.2. Helper Functions | pd.set_option('display.float_format', lambda x: '%.2f' % x)
def num_attributes(df1):
num_attributes = df1.select_dtypes(['int64', 'float64'])
#central tendency
ct1 = pd.DataFrame(num_attributes.apply(np.mean)).T
ct2 = pd.DataFrame(num_attributes.apply(np.median)).T
#dispersion
d1 = pd.DataFrame(num_attributes.apply(np.min)).T
d2 = pd.DataFrame(num_attributes.apply(np.max)).T
d3 = pd.DataFrame(num_attributes.apply(lambda x: x.max() - x.min())).T
d4 = pd.DataFrame(num_attributes.apply(np.std)).T
d5 = pd.DataFrame(num_attributes.apply(lambda x: x.skew())).T
d6 = pd.DataFrame(num_attributes.apply(lambda x: x.kurtosis())).T
m = pd.concat( [d1, d2, d3, ct1, ct2, d4, d5, d6] ).T.reset_index()
m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std','skew', 'kurtosis']
return m | _____no_output_____ | MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
0.3. Load Data | df_raw = pd.read_csv(r'../data/Ecommerce.csv')
| _____no_output_____ | MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
1.0. Data Description | df1 = df_raw.copy() | _____no_output_____ | MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
1.1. Rename Columns | df1.columns
df1.columns = ['invoice_no', 'stock_code', 'description', 'quantity', 'invoice_date',
'unit_price', 'customer_id', 'country'] | _____no_output_____ | MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
1.2. Data Shape | print(f'Number of rows: {df1.shape[0]}')
print(f'Number of columns: {df1.shape[1]}') | Number of rows: 541909
Number of columns: 8
| MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
1.3. Data Types | df1.dtypes | _____no_output_____ | MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
1.4. Check NAs | df1.isna().sum() | _____no_output_____ | MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
1.5. Drop NAs | # remove rows with missing values
df1 = df1.dropna(axis=0)
print('Data removed: {:.0f}%'.format((1-(len(df1)/len(df_raw)))*100)) | Data removed: 25%
| MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
1.6. Change dtypes | df1.dtypes
#invoice_no
# df1['invoice_no'] = df1['invoice_no'].astype(int)
#stock_code
# df1['stock_code'] = df1['stock_code'].astype(int)
#invoice_date --> Month --> b
df1['invoice_date'] = pd.to_datetime(df1['invoice_date'], format=('%d-%b-%y'))
#customer_id
df1['customer_id'] = df1['customer_id'].astype(int)
df1.dtypes | _____no_output_____ | MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
1.7. Descriptive statistics | num_attributes_df = df1.select_dtypes(['int64', 'float64'])  # renamed so it does not shadow the num_attributes() helper used below
cat_attributes = df1.select_dtypes(exclude = ['int64', 'float64', 'datetime64[ns]']) | _____no_output_____ | MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
1.7.1. Numerical Attributes | m = num_attributes(df1)
m | _____no_output_____ | MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
1.7.1.1 Investigating: 1. Negative quantity (returns?) 2. Price = 0 (promo?) 1.7.2. Categorical Attributes | cat_attributes.head() | _____no_output_____ | MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering
Invoice no |
#invoice_no -- some of them contain a non-numeric character
df_invoice_char = df1.loc[df1['invoice_no'].apply(lambda x: bool(re.search('[^0-9]+', x))), :]
len(df_invoice_char[df_invoice_char['quantity']<0])
print('Total of invoices with letter: {}'.format(len(df_invoice_char)))
print('Total of negative quantity: {}'.format(len(df1[df1['quantity']<0])))
print('Letter means negative quantity') | _____no_output_____ | MIT | notebooks/code3.ipynb | heitorfe/insiders_clustering |
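The check above suggests that invoices containing a letter are the negative-quantity rows, i.e. returns, which is exactly the "returns vs. purchases" filtering step listed in cycle 3 of the plan. A small sketch of that split (variable names are illustrative):

```python
# Split the cleaned data into purchases and returns, following the cycle-3 plan
df_returns = df1[df1['quantity'] < 0]
df_purchases = df1[df1['quantity'] >= 0]

print('Purchases: {} rows, Returns: {} rows'.format(len(df_purchases), len(df_returns)))

# Number of items returned per customer -- one of the planned features
returns_per_customer = (df_returns.groupby('customer_id')['quantity']
                        .sum().abs().rename('qty_returned').reset_index())
returns_per_customer.head()
```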