Dataset columns and value ranges (all rows are Python):

| column | values |
|---|---|
| body_hash | string, 64 characters |
| body | string, 23 to 109k characters |
| docstring | string, 1 to 57k characters |
| path | string, 4 to 198 characters |
| name | string, 1 to 115 characters |
| repository_name | string, 7 to 111 characters |
| repository_stars | float64, 0 to 191k |
| lang | 1 class ("python") |
| body_without_docstring | string, 14 to 108k characters |
| unified | string, 45 to 133k characters |

body_hash: 1baf2940903a4adb39006bfddbaf4c19f579a9dbed5ccbef9f6ade058fb60211 | path: library/bh1745/__init__.py | name: set_channel_compensation | repository: pimoroni/bh1745-python | stars: 8 | lang: python

    def set_channel_compensation(self, r, g, b, c):
        """Set the channel compensation scale factors.

        :param r: multiplier for red channel
        :param g: multiplier for green channel
        :param b: multiplier for blue channel
        :param c: multiplier for clear channel

        If you intend to measure a particular class of objects, say a set of matching wooden blocks with similar reflectivity and paint finish
        you should calibrate the channel compensation until you see colour values that broadly represent the colour of the objects you're testing.

        The default values were derived by testing a set of 5 Red, Green, Blue, Yellow and Orange wooden blocks.

        These scale factors are applied in `get_rgbc_raw` right after the raw values are read from the sensor.
        """
        self._channel_compensation = (r, g, b, c)

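A minimal calibration sketch for the flow the docstring describes, assuming the library exposes a `BH1745` class carrying the methods shown in this file; the multiplier values are illustrative, not the library defaults.

```python
from bh1745 import BH1745  # assumed import path for the class these methods belong to

sensor = BH1745()
sensor.setup()

# Scale each channel so readings of a known reference object come out roughly equal.
# These numbers are examples only; calibrate against your own objects.
sensor.set_channel_compensation(2.2, 1.0, 1.8, 10.0)
sensor.enable_white_balance(True)

r, g, b, c = sensor.get_rgbc_raw()  # compensation is applied to these raw values
print(r, g, b, c)
```
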
body_hash: 22a21dccf2a923576be682b9977a926331ed0b7ed33c70df88defb2bb06fa599 | path: library/bh1745/__init__.py | name: enable_white_balance | repository: pimoroni/bh1745-python | stars: 8 | lang: python

    def enable_white_balance(self, enable):
        """Enable scale compensation for the channels.

        :param enable: True to enable, False to disable

        See: `set_channel_compensation` for details.
        """
        self._enable_channel_compensation = True if enable else False

body_hash: 61a04aa5866bd21ec0da92a3df4855513233f0b3bc7cf10e35de334a7fa1247e | path: library/bh1745/__init__.py | name: get_rgbc_raw | repository: pimoroni/bh1745-python | stars: 8 | lang: python

    def get_rgbc_raw(self):
        """Return the raw Red, Green, Blue and Clear readings."""
        self.setup()
        colour_data = self._bh1745.get('COLOUR_DATA')
        r, g, b, c = colour_data.red, colour_data.green, colour_data.blue, colour_data.clear
        if self._enable_channel_compensation:
            cr, cg, cb, cc = self._channel_compensation
            r, g, b, c = r * cr, g * cg, b * cb, c * cc
        return r, g, b, c

body_hash: 01dc2a53a637b80471c3e79a0e73f203846680a6101bc1482518325271f1ec25 | path: library/bh1745/__init__.py | name: get_rgb_clamped | repository: pimoroni/bh1745-python | stars: 8 | lang: python

    def get_rgb_clamped(self):
        """Return an RGB value scaled against max(r, g, b).

        This will clamp/saturate one of the colour channels, providing a clearer idea
        of what primary colour an object is most likely to be.

        However the resulting colour reading will not be accurate for other purposes.
        """
        r, g, b, c = self.get_rgbc_raw()
        div = max(r, g, b)
        if div > 0:
            r, g, b = [int((x / float(div)) * 255) for x in (r, g, b)]
            return r, g, b
        return 0, 0, 0

body_hash: 4d0744f179a40c95d4a8413e18ea2b41f7e64abfa9de35ff8e6c9aee3e2fb993 | path: library/bh1745/__init__.py | name: get_rgb_scaled | repository: pimoroni/bh1745-python | stars: 8 | lang: python

    def get_rgb_scaled(self):
        """Return an RGB value scaled against the clear channel."""
        r, g, b, c = self.get_rgbc_raw()
        if c > 0:
            r, g, b = [min(255, int((x / float(c)) * 255)) for x in (r, g, b)]
            return r, g, b
        return 0, 0, 0

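To make the difference between the two scalings concrete, a small worked example with made-up raw readings rather than real sensor output:

```python
# Made-up raw readings for red, green, blue and clear.
r, g, b, c = 120, 60, 30, 200

# get_rgb_clamped scales against max(r, g, b): the dominant channel saturates at 255.
clamped = tuple(int(x / float(max(r, g, b)) * 255) for x in (r, g, b))
print(clamped)  # (255, 127, 63)  -> reads clearly as "red-ish"

# get_rgb_scaled scales against the clear channel instead, capped at 255.
scaled = tuple(min(255, int(x / float(c) * 255)) for x in (r, g, b))
print(scaled)   # (153, 76, 38)   -> keeps brightness relative to the clear reading
```
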
body_hash: e55fbd4491f9253d223ff840d5210e76262a2686e6b62f088afa092dca31853a | path: featurevec.py | name: __init__ | repository: anonymousicmlsubmission/FeatureVectors | stars: 0 | lang: python

    def __init__(self, mode, max_depth=3, feature_names=None, max_sentences=20000, exp_rand_tree_size=True, tree_generator=None):
        """
        mode: 'classify' or 'regress'
        max_depth: maximum depth of trained trees
        feature_names: names of features
        max_sentences: maximum number of extracted sentences
        exp_rand_tree_size: Having trees with different sizes
        tree_generator: Tree generator model (overwrites above features)
        """
        self.feature_names = feature_names
        self.mode = mode
        max_leafs = 2 ** max_depth
        num_trees = max_sentences // max_leafs
        if tree_generator is None:
            tree_generator = RandomForestClassifier(num_trees, max_depth=max_depth)
        self.exp_rand_tree_size = exp_rand_tree_size
        self.rf = RuleFit(rfmode=mode, tree_size=max_leafs, max_rules=max_sentences,
                          tree_generator=tree_generator, exp_rand_tree_size=True,
                          fit_lasso=False, Cs=10.0 ** np.arange(-4, 1), cv=3)

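The constructor ties `max_depth` and `max_sentences` together: a tree of depth d has at most 2**d leaves, so the forest size is picked to keep the total leaf (rule) count at or under `max_sentences`. Checking the defaults:

```python
max_depth = 3
max_sentences = 20000

max_leafs = 2 ** max_depth            # at most 8 leaves (rules) per tree
num_trees = max_sentences // max_leafs
print(max_leafs, num_trees)           # 8 2500 -> 2500 trees * 8 leaves = 20000 rule "sentences"
```
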
body_hash: 7b7e625170aec90a5ceb482bb560f7c78cc7b2ba60b236ea5d81b93a719933e2 | path: featurevec.py | name: fit | repository: anonymousicmlsubmission/FeatureVectors | stars: 0 | lang: python

    def fit(self, X, y, restart=True, bagging=0):
        """Fit the tree model.
        X: inputs
        y: outputs (integer class label or real value)
        restart: To train from scratch tree generator model
        bagging: If >0 applies bagging on trees to compute confidence intervals
        """
        if not bagging:
            bagging = 0
        dimred = TruncatedSVD(2)
        self.rf.fit(X, y, restart=restart)
        rules = self.rf.get_rules()['rule'].values
        cm = cooccurance_matrix(rules, X.shape[-1])
        vectors = dimred.fit_transform(cm)
        vectors = normalize_angles(vectors)
        self.norms = np.clip(np.linalg.norm(vectors, axis=-1, keepdims=True), 1e-12, None)
        vectors /= np.max(self.norms)
        self.vectors = vectors
        self.importance = np.linalg.norm(self.vectors, axis=-1)
        self.angles = np.arctan2(self.vectors[:, 1], self.vectors[:, 0])
        self.stds = np.zeros(vectors.shape)
        self.predictor = self.rf.tree_generator
        if bagging:
            all_vectors = []
            for _ in range(bagging):
                self.rf.bag_trees(X, y)
                rules_bag = self.rf.get_rules()['rule'].values
                cm_bag = cooccurance_matrix(rules_bag, X.shape[-1])
                vectors_bag = dimred.fit_transform(cm_bag)
                vectors_bag = normalize_angles(vectors_bag)
                norms_bag = np.clip(np.linalg.norm(vectors_bag, axis=-1, keepdims=True), 1e-12, None)
                all_vectors.append(vectors_bag / norms_bag)
            self.stds = np.std(all_vectors, 0)

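A standalone sketch of the geometry at the end of `fit`: once the rule co-occurrence matrix has been reduced to two dimensions, each feature's importance is just its vector length and its direction comes from `arctan2`. The toy vectors below stand in for the SVD output; nothing here re-runs the rule extraction.

```python
import numpy as np

# Toy 2-D feature vectors, as if produced by TruncatedSVD on a co-occurrence matrix.
vectors = np.array([[0.9, 0.1], [0.2, -0.4], [0.05, 0.02]])

# Normalise by the largest vector length so importances land in [0, 1].
norms = np.clip(np.linalg.norm(vectors, axis=-1, keepdims=True), 1e-12, None)
vectors = vectors / norms.max()

importance = np.linalg.norm(vectors, axis=-1)      # vector length == importance
angles = np.arctan2(vectors[:, 1], vectors[:, 0])  # direction in radians
print(importance, angles)
```
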
body_hash: c6a36276e9817138756935b4eee675ac229c3ad6d8feb44c6d1d8be1378416e3 | path: featurevec.py | name: plot | repository: anonymousicmlsubmission/FeatureVectors | stars: 0 | lang: python

    def plot(self, dynamic=True, confidence=True, path=None):
        """Plot the feature-vectors.
        dynamic: If True the output is a dynamic html plot. Otherwise, it will be an image.
        confidence: To show confidence intervals or not
        path: Path to save the image. If dy
        """
        mx = 1.1
        angles = np.arctan2(self.vectors[:, 1], self.vectors[:, 0])
        max_angle = np.max(np.abs(angles))
        feature_names = self.feature_names + ['origin', '']
        plot_vectors = np.concatenate([self.vectors, [[0, 0], [0, 0]]])
        vectors_sizes = np.linalg.norm(plot_vectors, axis=-1)
        plot_angles = np.concatenate([angles, [-max_angle, max_angle]])
        plot_data = np.stack([plot_vectors[:, 1], plot_vectors[:, 0], plot_angles, feature_names], axis=-1)
        plot_df = pd.DataFrame(data=plot_data, columns=['x', 'y', 'angles', 'names'])
        plot_df[['x', 'y', 'angles']] = plot_df[['x', 'y', 'angles']].apply(pd.to_numeric)
        if dynamic:
            fig = px.scatter(plot_df, x='x', y='y', color='angles', width=1000, height=500,
                             hover_name=feature_names,
                             hover_data={'x': False, 'y': False, 'angles': False, 'names': False},
                             color_continuous_scale=px.colors.sequential.Rainbow)
            fig.update_yaxes(visible=False, showticklabels=False, range=[0, mx])
            fig.update_xaxes(visible=False, showticklabels=False, range=[-mx, mx])
        else:
            fig = px.scatter(plot_df, x='x', y='y', color='angles', width=1000, height=500,
                             hover_name='names',
                             hover_data={'x': False, 'y': False, 'angles': False, 'names': False},
                             color_continuous_scale=px.colors.sequential.Rainbow)
            max_name_len = max([len(i) for i in feature_names])
            for i in range(len(plot_vectors) - 2):
                if plot_vectors[:, 1][i] > 0:
                    name = feature_names[i] + ''.join([' '] * (max_name_len - len(feature_names[i])))
                    ax = plot_vectors[:, 1][i] + 0.2
                else:
                    name = ''.join([' '] * (max_name_len - len(feature_names[i]))) + feature_names[i]
                    ax = plot_vectors[:, 1][i] - 0.2
                if vectors_sizes[i] < 0.2:
                    continue
                fig.add_annotation(x=plot_vectors[:, 1][i], y=plot_vectors[:, 0][i],
                                   text=feature_names[i] + ''.join([' '] * (max_name_len - len(feature_names[i]))),
                                   font=dict(size=15), axref='x', ayref='y',
                                   ax=ax, ay=plot_vectors[:, 0][i], arrowhead=2)
            fig.update_yaxes(visible=False, showticklabels=False, range=[0, mx])
            fig.update_xaxes(visible=False, showticklabels=False, range=[-mx, mx])
        fig.update_traces(marker=dict(size=10), textfont_size=15)
        fig.update(layout_coloraxis_showscale=False)
        fig.update_layout(showlegend=False)
        for i in range(10):
            fig.add_shape(type='circle',
                          x0=((i + 1) / 10) * mx, y0=((i + 1) / 10) * mx,
                          x1=(-(i + 1) / 10) * mx, y1=(-(i + 1) / 10) * mx,
                          line_color='red', opacity=0.5, line=dict(dash='dot', width=3))
        if confidence:
            for vector, std, angle in zip(self.vectors, self.stds, angles):
                fig.add_shape(type='circle',
                              x0=vector[1] + 3 * std[1], y0=vector[0] + 3 * std[0],
                              x1=vector[1] - 3 * std[1], y1=vector[0] - 3 * std[0],
                              line_color='gray', opacity=0.5, line=dict(dash='solid', width=1))
        fig.show()
        if path:
            if len(path.split('/')) > 1 and not os.path.exists('/'.join(path.split('/')[:-1])):
                os.makedirs('/'.join(path.split('/')[:-1]))
            if dynamic:
                assert path.split('.')[-1] == 'html', 'For a dynamic figure, path should be an html file!'
                fig.write_html(path)
            else:
                fig.write_image(path)

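A hypothetical end-to-end run of this class; the class name `FeatureVec`, the import path, and the random stand-in data are all assumptions rather than anything taken from featurevec.py.

```python
import numpy as np
from featurevec import FeatureVec  # assumed class name and module

# Random stand-in data with a simple synthetic label.
X = np.random.rand(200, 5)
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

fv = FeatureVec(mode='classify', max_depth=3,
                feature_names=[f'f{i}' for i in range(5)])
fv.fit(X, y, bagging=5)  # bagging > 0 also estimates confidence intervals
fv.plot(dynamic=True, confidence=True, path='out/feature_vectors.html')
```
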
body_hash: 8a728c0aacf430cec2b4fc98f2f277c6918da2f2b51537967f11ddd3539e754b | path: intake_sql/intake_sql.py | name: read_sql_query | repository: mounte/intake-sql | stars: 14 | lang: python

    def read_sql_query(uri, sql, where, where_tmp=None, meta=None, kwargs=None):
        """
        Create a dask dataframe from SQL using explicit partitioning

        Parameters
        ----------
        uri: str
            connection string (sql sqlalchemy documentation)
        sql: str
            SQL query to execute
        where: list of str or list of tuple
            Either a set of explicit partitioning statements (e.g.,
            `"WHERE index_col < 50"`...) or pairs of valued to be entered into
            where_template, if using
        where_tmp: str (optional)
            Template for generating partition selection clauses, using the
            values from where_values, e.g.,
            `"WHERE index_col >= {} AND index_col < {}"`
        meta: dataframe metadata (optional)
            If given, a zero-length version of the dataframe structure, with
            index and column names and types correctly specified. Can also be
            the same information in dictionary or tuple of tuples format
        kwargs: dict
            Any further parameters to pass to pd.read_sql_query, see
            its documentation
        """
        import dask
        import dask.dataframe as dd
        if where_tmp is not None:
            where = [where_tmp.format(values) for values in where]
        if kwargs is None:
            kwargs = {}
        dload = dask.delayed(load_part)
        parts = [dload(sql, uri, w, kwargs) for w in where]
        return dd.from_delayed(parts, meta=meta)

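A sketch of how the two partitioning styles described in the docstring could be invoked; the connection string, table and column names are invented, and the template variant uses a single placeholder per value.

```python
uri = 'sqlite:///example.db'   # any SQLAlchemy connection string
sql = 'SELECT * FROM readings'

# Explicit partition clauses, one Dask partition per entry.
ddf = read_sql_query(uri, sql,
                     where=['WHERE year < 2020', 'WHERE year >= 2020'])

# Or values substituted into a template.
ddf = read_sql_query(uri, sql,
                     where=[2019, 2020, 2021],
                     where_tmp='WHERE year = {}')

df = ddf.compute()  # materialise the result as a pandas DataFrame
```
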
body_hash: 9a78e38224fb194077db782185115f7e08123857a04a68c72f3ca9596ad87163 | path: intake_sql/intake_sql.py | name: make_ibis_client | repository: mounte/intake-sql | stars: 14 | lang: python

    def make_ibis_client(uri):
        """
        Create an ibis client from a SQLAlchemy connection string.

        Currently targets existing ibis backends that use SQLAlchemy, namely
            MySQL
            PostgreSQL
            SQLite

        Parameters
        ----------
        uri: str
            connection string (sql sqlalchemy documentation)

        Returns
        -------
        A tuple of client, supports_schemas
        """
        import sqlalchemy
        url = sqlalchemy.engine.url.make_url(uri)
        dialect = url.get_dialect()
        name = dialect.name
        if name == 'postgresql':
            import ibis
            return ibis.postgres.connect(url=uri), True
        elif name == 'mysql':
            import ibis
            return ibis.mysql.connect(url=uri), True
        elif name == 'sqlite':
            import ibis
            return ibis.sqlite.connect(path=url.database), False
        else:
            raise ValueError(f'Unable to create an ibis connection for {uri}')

body_hash: 9a715f53ca846c6577f8ec855b260dacbb402de374335029cd34990d7a314a14 | path: intake_sql/intake_sql.py | name: to_ibis | repository: mounte/intake-sql | stars: 14 | lang: python

    def to_ibis(self):
        """
        Create an ibis expression for the data source.
        The sql_expr for the source must be a table, not a table expression.
        The ibis expression is not partitioned.
        """
        client, supports_schemas = make_ibis_client(self._uri)
        schema = self._sql_kwargs.get('schema')
        schema_kwargs = {'schema': schema} if supports_schemas else {}
        if self._sql_expr not in client.list_tables(**schema_kwargs):
            raise ValueError('Only full tables can be used in to_ibis')
        else:
            return client.table(self._sql_expr, **schema_kwargs)

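One possible route to this method from user code, assuming intake-sql registers a plain `sql` driver so that `intake.open_sql` exists (an assumption, not confirmed by this snippet), and that `readings` names a real table rather than a query:

```python
import intake

# 'readings' must be a full table name; arbitrary SQL expressions raise ValueError.
source = intake.open_sql('postgresql://user:pass@host/db', 'readings')

table = source.to_ibis()            # unpartitioned ibis table expression
recent = table[table.year >= 2020]  # build the query lazily with ibis
df = recent.execute()               # run it and get a pandas DataFrame
```
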
body_hash: 83b1733200b4287dc3865833a88b359d68de40cd2dcaa4eeb0a76f65ae30f053 | path: src/Ioticiser/Config.py | name: __init__ | repository: aniknarayan/ioticiser_new | stars: 0 | lang: python

    def __init__(self, fn):
        """Config helper reads/writes .ini files."""
        self.__config = {}
        self.__fname = fn
        conf_name = self.__fname
        if os.path.exists(self.__fname):
            conf_stream = open(self.__fname, 'r')
            cpa = ConfigParser()
            if PY3:
                cpa.read_file(conf_stream, source=conf_name)
            else:
                cpa.readfp(conf_stream, conf_name)
            for ese in cpa.sections():
                for eva in cpa.options(ese):
                    self.__set(ese, eva, cpa.get(ese, eva))

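A short sketch of how the helper would be driven, assuming the `Config` class above is importable and a small .ini file exists on disk; the section and option names are made up.

```python
# example.ini (hypothetical contents):
#
#   [agent]
#   host = iot.example.com
#   port = 8118

cfg = Config('example.ini')

print(cfg.get('agent', 'host'))  # 'iot.example.com'
print(cfg.get('agent'))          # whole section as a dict
print(cfg.get('agent', 'nope'))  # None when the option is missing
```
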
body_hash: 10cbff836e8e31eb5818e8333812940ac6e9f05f163093737ecb1696f1f73028 | path: src/Ioticiser/Config.py | name: get | repository: aniknarayan/ioticiser_new | stars: 0 | lang: python

    def get(self, section, val=None):
        """Get a setting or the default

        `Returns` The current value of the setting `val` or the default, or `None` if not found
        Or dictionary of whole section if val is None

        `section` (mandatory) (string) the section name in the config E.g. `"agent"`

        `val` (optional) (string) the section name in the config E.g. `"host"`
        """
        if section in self.__config:
            if val is None:
                return self.__config[section]
            if val in self.__config[section]:
                val = val.lower()
                return self.__config[section][val]
        return None

body_hash: 92ecf1b3c7be7bbe1590318e50ea7f039da6ff38477a96e9f7ab21ef08212e64 | path: src/Ioticiser/Config.py | name: __set | repository: aniknarayan/ioticiser_new | stars: 0 | lang: python

    def __set(self, section, val, data):
        """Add a setting to the config

        `section` (mandatory) (string) the section name in the config E.g. `"agent"`

        `val` (mandatory) (string) the section name in the config E.g. `"host"`

        `data` (mandatory) (as appropriate) the new value for the `val`
        """
        val = val.lower()
        if section not in self.__config:
            self.__config[section] = {}
        self.__config[section][val] = data

body_hash: 39a558f2e4c3540d809062796ca05e89d81a70593e23a29e61d60ed7eb90ad6b | path: src/apps/core/models/ModuleModels.py | name: total_depth | repository: HydroLearn/HydroLearn | stars: 0 | lang: python

    @property
    def total_depth(self):
        """
        method to return the total depth of this lesson's structure
        (the max level of nested children)
        :return: integer representation of child depth
        """
        if self.sub_lessons:
            max_depth = 0
            for sub_lesson in self.sub_lessons.all():
                max_depth = max(max_depth, sub_lesson.total_depth)
            return max_depth + 1
        else:
            return 1

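The recursion is easier to see with throwaway objects in place of Django models; `Node` below mirrors `total_depth` but reads a plain `children` list instead of `sub_lessons.all()`:

```python
# Stand-in objects just to trace the recursion (not Django models).
class Node:
    def __init__(self, children=()):
        self.children = list(children)

    @property
    def total_depth(self):
        if self.children:
            return 1 + max(child.total_depth for child in self.children)
        return 1

leaf = Node()
mid = Node([leaf, Node()])
root = Node([mid, leaf])
print(leaf.total_depth, mid.total_depth, root.total_depth)  # 1 2 3
```
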
body_hash: f30cea291fea9a8823677d6920f5d10da662c2d60e3e0b39ddad5e59ff4ebced | path: src/apps/core/models/ModuleModels.py | name: derivation | repository: HydroLearn/HydroLearn | stars: 0 | lang: python

    def derivation(self):
        """
        Method to copy a published lesson instance and set
        derivation attributes to point to this lesson and it's creator

        :return: new lesson instance with attributes set to link to derived lesson
        """
        derivation = self.copy()
        derivation.derived_date = now()
        derivation.derived_lesson_slug = self.slug
        derivation.derived_lesson_creator = self.created_by
        return derivation

body_hash: 8e300d01b8e2e18ab5632150f1df28bafe4454f9a3f649a6b30ff063d0f33d01 | path: src/apps/core/models/ModuleModels.py | name: copy | repository: HydroLearn/HydroLearn | stars: 0 | lang: python

    def copy(self, maintain_ref=False):
        """
        generate a new (unsaved) lesson instance based on this lesson, with a fresh ref_id if specified.

        Notes:
            The newly generated instance:
                - removes reference to parent
                - marks 'position' as 0
                - and sets 'is_deleted' to False

        Additionally, this method does not copy placeholder(content), tags, collaborators, or
        child-objects (use copy_content (or copy_children for children) after save to do this)

        :return: a new (unsaved) copy of this lesson
        """
        new_instance = Lesson(parent_lesson=None, position=0, is_deleted=False, name=self.name, short_name=self.short_name)
        if maintain_ref:
            new_instance.ref_id = self.ref_id
        if self.derived_date:
            new_instance.derived_date = self.derived_date
            new_instance.derived_lesson_slug = self.derived_lesson_slug
            new_instance.derived_lesson_creator = self.derived_lesson_creator
        return new_instance

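The docstring spells out the intended call order: copy the bare instance, save it, then bring content and children across. A sketch of that flow, where `original_lesson` and `some_parent` are hypothetical existing model instances (this is not code taken from the project):

```python
# Duplicate an existing lesson; maintain_ref defaults to False, so a fresh ref_id is used.
duplicate = original_lesson.copy()
duplicate.parent_lesson = some_parent     # reattach wherever the copy should live
duplicate.position = 0
duplicate.save()                          # must be saved before content/children are copied

duplicate.copy_content(original_lesson)   # tags + placeholder plugins
duplicate.copy_children(original_lesson)  # sub_lessons, sections, app refs, objectives
```
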
50acbdbbbb214cc9f5e2a548ba262fac2bdbe275dccaf32a54fa601d8c95cc2d | def copy_children(self, from_instance, maintain_ref=False):
'\n Copy child relations (sub_lessons/sections) from a passed lesson, with the option of specifying\n if the ref_id should be maintained. this should only happen during publishing.\n\n :param from_instance: Lesson instance from which the child relations are provided.\n :param maintain_ref: Boolean representing if the ref_id should be maintained on the child objects, this should only be true in the case of publication.\n\n :return: None\n '
self.sections.delete()
self.sub_lessons.delete()
for section_item in from_instance.sections.all():
new_section = section_item.copy(maintain_ref)
new_section.lesson = self
new_section.position = section_item.position
new_section.save()
new_section.copy_content(section_item)
new_section.copy_children(section_item, maintain_ref)
for sub_lesson in from_instance.sub_lessons.all():
new_lesson = sub_lesson.copy(maintain_ref)
new_lesson.parent_lesson = self
new_lesson.position = sub_lesson.position
new_lesson.save()
new_lesson.copy_content(sub_lesson)
new_lesson.copy_children(sub_lesson, maintain_ref)
for app_ref in from_instance.app_refs.all():
new_ref = AppReference(app_name=app_ref.app_name, app_link=app_ref.app_link, lesson=self)
new_ref.save()
for learning_obj in from_instance.learning_objectives.all():
new_lo = Learning_Objective(lesson=self, condition=learning_obj.condition, task=learning_obj.task, degree=learning_obj.degree, verb=learning_obj.verb)
new_lo.save()
for outcome in learning_obj.outcomes.all():
new_lo.outcomes.add(outcome) | Copy child relations (sub_lessons/sections) from a passed lesson, with the option of specifying
if the ref_id should be maintained. this should only happen during publishing.
:param from_instance: Lesson instance from which the child relations are provided.
:param maintain_ref: Boolean representing if the ref_id should be maintained on the child objects, this should only be true in the case of publication.
:return: None | src/apps/core/models/ModuleModels.py | copy_children | HydroLearn/HydroLearn | 0 | python | def copy_children(self, from_instance, maintain_ref=False):
'\n Copy child relations (sub_lessons/sections) from a passed lesson, with the option of specifying\n if the ref_id should be maintained. this should only happen during publishing.\n\n :param from_instance: Lesson instance from which the child relations are provided.\n :param maintain_ref: Boolean representing if the ref_id should be maintained on the child objects, this should only be true in the case of publication.\n\n :return: None\n '
self.sections.delete()
self.sub_lessons.delete()
for section_item in from_instance.sections.all():
new_section = section_item.copy(maintain_ref)
new_section.lesson = self
new_section.position = section_item.position
new_section.save()
new_section.copy_content(section_item)
new_section.copy_children(section_item, maintain_ref)
for sub_lesson in from_instance.sub_lessons.all():
new_lesson = sub_lesson.copy(maintain_ref)
new_lesson.parent_lesson = self
new_lesson.position = sub_lesson.position
new_lesson.save()
new_lesson.copy_content(sub_lesson)
new_lesson.copy_children(sub_lesson, maintain_ref)
for app_ref in from_instance.app_refs.all():
new_ref = AppReference(app_name=app_ref.app_name, app_link=app_ref.app_link, lesson=self)
new_ref.save()
for learning_obj in from_instance.learning_objectives.all():
new_lo = Learning_Objective(lesson=self, condition=learning_obj.condition, task=learning_obj.task, degree=learning_obj.degree, verb=learning_obj.verb)
new_lo.save()
for outcome in learning_obj.outcomes.all():
new_lo.outcomes.add(outcome) | def copy_children(self, from_instance, maintain_ref=False):
'\n Copy child relations (sub_lessons/sections) from a passed lesson, with the option of specifying\n if the ref_id should be maintained. this should only happen during publishing.\n\n :param from_instance: Lesson instance from which the child relations are provided.\n :param maintain_ref: Boolean representing if the ref_id should be maintained on the child objects, this should only be true in the case of publication.\n\n :return: None\n '
self.sections.delete()
self.sub_lessons.delete()
for section_item in from_instance.sections.all():
new_section = section_item.copy(maintain_ref)
new_section.lesson = self
new_section.position = section_item.position
new_section.save()
new_section.copy_content(section_item)
new_section.copy_children(section_item, maintain_ref)
for sub_lesson in from_instance.sub_lessons.all():
new_lesson = sub_lesson.copy(maintain_ref)
new_lesson.parent_lesson = self
new_lesson.position = sub_lesson.position
new_lesson.save()
new_lesson.copy_content(sub_lesson)
new_lesson.copy_children(sub_lesson, maintain_ref)
for app_ref in from_instance.app_refs.all():
new_ref = AppReference(app_name=app_ref.app_name, app_link=app_ref.app_link, lesson=self)
new_ref.save()
for learning_obj in from_instance.learning_objectives.all():
new_lo = Learning_Objective(lesson=self, condition=learning_obj.condition, task=learning_obj.task, degree=learning_obj.degree, verb=learning_obj.verb)
new_lo.save()
for outcome in learning_obj.outcomes.all():
new_lo.outcomes.add(outcome)<|docstring|>Copy child relations (sub_lessons/sections) from a passed lesson, with the option of specifying
if the ref_id should be maintained. this should only happen during publishing.
:param from_instance: Lesson instance from which the child relations are provided.
:param maintain_ref: Boolean representing if the ref_id should be maintained on the child objects, this should only be true in the case of publication.
:return: None<|endoftext|> |
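
A minimal usage sketch of the clone-then-populate pattern documented above. Only copy()/save()/copy_content()/copy_children() come from the source; `draft_lesson`, the positional call style, and the publication context are assumptions for illustration.

# assumed: draft_lesson is an existing, saved Lesson that is being published
published = draft_lesson.copy(True)             # True ~ maintain_ref, mirroring child.copy(maintain_ref) above
published.save()                                # must be saved before content/children can attach to it
published.copy_content(draft_lesson)            # tags + summary placeholder plugins
published.copy_children(draft_lesson, maintain_ref=True)   # sections, sub-lessons, app refs, learning objectives
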
3da095a37f6c27574c90bba825b4997dae877e857d867218e223229cc2d0ae21 | def copy_content(self, from_instance):
'\n copy content including tags, and placeholder plugins to this instance from a passed Lesson\n\n :param from_instance: a Lesson object the content/tags are being copied from\n :return: None\n '
self.tags.add(*list(from_instance.tags.names()))
self.summary.clear()
plugins = from_instance.summary.get_plugins_list()
copy_plugins_to(plugins, self.summary, no_signals=True) | copy content including tags, and placeholder plugins to this instance from a passed Lesson
:param from_instance: a Lesson object the content/tags are being copied from
:return: None | src/apps/core/models/ModuleModels.py | copy_content | HydroLearn/HydroLearn | 0 | python | def copy_content(self, from_instance):
'\n copy content including tags, and placeholder plugins to this instance from a passed Lesson\n\n :param from_instance: a Lesson object the content/tags are being copied from\n :return: None\n '
self.tags.add(*list(from_instance.tags.names()))
self.summary.clear()
plugins = from_instance.summary.get_plugins_list()
copy_plugins_to(plugins, self.summary, no_signals=True) | def copy_content(self, from_instance):
'\n copy content including tags, and placeholder plugins to this instance from a passed Lesson\n\n :param from_instance: a Lesson object the content/tags are being copied from\n :return: None\n '
self.tags.add(*list(from_instance.tags.names()))
self.summary.clear()
plugins = from_instance.summary.get_plugins_list()
copy_plugins_to(plugins, self.summary, no_signals=True)<|docstring|>copy content including tags, and placeholder plugins to this instance from a passed Lesson
:param from_instance: a Lesson object the content/tags are being copied from
:return: None<|endoftext|> |
ab95994d7a61dddbfa247e3a6090a990e06e2f190e13f86d5ec018e952703396 | def get_owner(self):
"\n get the owner of the lesson (created-by), if this is a child lesson\n return the owner of it's parent\n :return: user who created the root lesson\n "
if self.parent_lesson:
return self.parent_lesson.get_Publishable_parent().get_owner()
else:
return self.created_by | get the owner of the lesson (created-by), if this is a child lesson
return the owner of it's parent
:return: user who created the root lesson | src/apps/core/models/ModuleModels.py | get_owner | HydroLearn/HydroLearn | 0 | python | def get_owner(self):
"\n get the owner of the lesson (created-by), if this is a child lesson\n return the owner of it's parent\n :return: user who created the root lesson\n "
if self.parent_lesson:
return self.parent_lesson.get_Publishable_parent().get_owner()
else:
return self.created_by | def get_owner(self):
"\n get the owner of the lesson (created-by), if this is a child lesson\n return the owner of it's parent\n :return: user who created the root lesson\n "
if self.parent_lesson:
return self.parent_lesson.get_Publishable_parent().get_owner()
else:
return self.created_by<|docstring|>get the owner of the lesson (created-by), if this is a child lesson
return the owner of it's parent
:return: user who created the root lesson<|endoftext|> |
956106a591f46b5891bddb075b0da83bf309439331f7d472a69127baf2771c68 | def qid_shape(val: Any, default: TDefault=RaiseTypeErrorIfNotProvided) -> Union[(Tuple[(int, ...)], TDefault)]:
"Returns a tuple describing the number of quantum levels of each\n qubit/qudit/qid `val` operates on.\n\n Args:\n val: The value to get the shape of.\n default: Determines the fallback behavior when `val` doesn't have\n a shape. If `default` is not set, a TypeError is raised. If\n default is set to a value, that value is returned.\n\n Returns:\n If `val` has a `_qid_shape_` method and its result is not\n NotImplemented, that result is returned. Otherwise, if `val` has a\n `_num_qubits_` method, the shape with `num_qubits` qubits is returned\n e.g. `(2,)*num_qubits`. If neither method returns a value other than\n NotImplemented and a default value was specified, the default value is\n returned.\n\n Raises:\n TypeError: `val` doesn't have either a `_qid_shape_` or a `_num_qubits_`\n method (or they returned NotImplemented) and also no default value\n was specified.\n "
getter = getattr(val, '_qid_shape_', None)
result = (NotImplemented if (getter is None) else getter())
if (result is not NotImplemented):
return result
if (isinstance(val, Sequence) and all((isinstance(q, ops.Qid) for q in val))):
return tuple((q.dimension for q in val))
num_getter = getattr(val, '_num_qubits_', None)
num_qubits = (NotImplemented if (num_getter is None) else num_getter())
if (num_qubits is not NotImplemented):
return ((2,) * num_qubits)
if (default is not RaiseTypeErrorIfNotProvided):
return default
if (getter is not None):
raise TypeError("object of type '{}' does have a _qid_shape_ method, but it returned NotImplemented.".format(type(val)))
if (num_getter is not None):
raise TypeError("object of type '{}' does have a _num_qubits_ method, but it returned NotImplemented.".format(type(val)))
raise TypeError(f"object of type '{type(val)}' has no _num_qubits_ or _qid_shape_ methods.") | Returns a tuple describing the number of quantum levels of each
qubit/qudit/qid `val` operates on.
Args:
val: The value to get the shape of.
default: Determines the fallback behavior when `val` doesn't have
a shape. If `default` is not set, a TypeError is raised. If
default is set to a value, that value is returned.
Returns:
If `val` has a `_qid_shape_` method and its result is not
NotImplemented, that result is returned. Otherwise, if `val` has a
`_num_qubits_` method, the shape with `num_qubits` qubits is returned
e.g. `(2,)*num_qubits`. If neither method returns a value other than
NotImplemented and a default value was specified, the default value is
returned.
Raises:
TypeError: `val` doesn't have either a `_qid_shape_` or a `_num_qubits_`
method (or they returned NotImplemented) and also no default value
was specified. | cirq-core/cirq/protocols/qid_shape_protocol.py | qid_shape | viathor/Cirq | 3,326 | python | def qid_shape(val: Any, default: TDefault=RaiseTypeErrorIfNotProvided) -> Union[(Tuple[(int, ...)], TDefault)]:
"Returns a tuple describing the number of quantum levels of each\n qubit/qudit/qid `val` operates on.\n\n Args:\n val: The value to get the shape of.\n default: Determines the fallback behavior when `val` doesn't have\n a shape. If `default` is not set, a TypeError is raised. If\n default is set to a value, that value is returned.\n\n Returns:\n If `val` has a `_qid_shape_` method and its result is not\n NotImplemented, that result is returned. Otherwise, if `val` has a\n `_num_qubits_` method, the shape with `num_qubits` qubits is returned\n e.g. `(2,)*num_qubits`. If neither method returns a value other than\n NotImplemented and a default value was specified, the default value is\n returned.\n\n Raises:\n TypeError: `val` doesn't have either a `_qid_shape_` or a `_num_qubits_`\n method (or they returned NotImplemented) and also no default value\n was specified.\n "
getter = getattr(val, '_qid_shape_', None)
result = (NotImplemented if (getter is None) else getter())
if (result is not NotImplemented):
return result
if (isinstance(val, Sequence) and all((isinstance(q, ops.Qid) for q in val))):
return tuple((q.dimension for q in val))
num_getter = getattr(val, '_num_qubits_', None)
num_qubits = (NotImplemented if (num_getter is None) else num_getter())
if (num_qubits is not NotImplemented):
return ((2,) * num_qubits)
if (default is not RaiseTypeErrorIfNotProvided):
return default
if (getter is not None):
raise TypeError("object of type '{}' does have a _qid_shape_ method, but it returned NotImplemented.".format(type(val)))
if (num_getter is not None):
raise TypeError("object of type '{}' does have a _num_qubits_ method, but it returned NotImplemented.".format(type(val)))
raise TypeError(f"object of type '{type(val)}' has no _num_qubits_ or _qid_shape_ methods.") | def qid_shape(val: Any, default: TDefault=RaiseTypeErrorIfNotProvided) -> Union[(Tuple[(int, ...)], TDefault)]:
"Returns a tuple describing the number of quantum levels of each\n qubit/qudit/qid `val` operates on.\n\n Args:\n val: The value to get the shape of.\n default: Determines the fallback behavior when `val` doesn't have\n a shape. If `default` is not set, a TypeError is raised. If\n default is set to a value, that value is returned.\n\n Returns:\n If `val` has a `_qid_shape_` method and its result is not\n NotImplemented, that result is returned. Otherwise, if `val` has a\n `_num_qubits_` method, the shape with `num_qubits` qubits is returned\n e.g. `(2,)*num_qubits`. If neither method returns a value other than\n NotImplemented and a default value was specified, the default value is\n returned.\n\n Raises:\n TypeError: `val` doesn't have either a `_qid_shape_` or a `_num_qubits_`\n method (or they returned NotImplemented) and also no default value\n was specified.\n "
getter = getattr(val, '_qid_shape_', None)
result = (NotImplemented if (getter is None) else getter())
if (result is not NotImplemented):
return result
if (isinstance(val, Sequence) and all((isinstance(q, ops.Qid) for q in val))):
return tuple((q.dimension for q in val))
num_getter = getattr(val, '_num_qubits_', None)
num_qubits = (NotImplemented if (num_getter is None) else num_getter())
if (num_qubits is not NotImplemented):
return ((2,) * num_qubits)
if (default is not RaiseTypeErrorIfNotProvided):
return default
if (getter is not None):
raise TypeError("object of type '{}' does have a _qid_shape_ method, but it returned NotImplemented.".format(type(val)))
if (num_getter is not None):
raise TypeError("object of type '{}' does have a _num_qubits_ method, but it returned NotImplemented.".format(type(val)))
raise TypeError(f"object of type '{type(val)}' has no _num_qubits_ or _qid_shape_ methods.")<|docstring|>Returns a tuple describing the number of quantum levels of each
qubit/qudit/qid `val` operates on.
Args:
val: The value to get the shape of.
default: Determines the fallback behavior when `val` doesn't have
a shape. If `default` is not set, a TypeError is raised. If
default is set to a value, that value is returned.
Returns:
If `val` has a `_qid_shape_` method and its result is not
NotImplemented, that result is returned. Otherwise, if `val` has a
`_num_qubits_` method, the shape with `num_qubits` qubits is returned
e.g. `(2,)*num_qubits`. If neither method returns a value other than
NotImplemented and a default value was specified, the default value is
returned.
Raises:
TypeError: `val` doesn't have either a `_qid_shape_` or a `_num_qubits_`
method (or they returned NotImplemented) and also no default value
was specified.<|endoftext|> |
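
A short usage sketch consistent with the fallback order documented above (the object's own protocol method, then a sequence of Qids, then the caller-supplied default). It assumes a reasonably recent Cirq release where these gates and qid types exist.

import cirq

assert cirq.qid_shape(cirq.CNOT) == (2, 2)                              # resolved from the gate itself
assert cirq.qid_shape(cirq.LineQid.range(3, dimension=3)) == (3, 3, 3)  # sequence-of-Qid branch
assert cirq.qid_shape(object(), default=None) is None                   # default instead of TypeError
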
db478900bb9cef8697d91dc872c4dabcfe5679441218189ff8ab81e50a7367ee | def num_qubits(val: Any, default: TDefault=RaiseTypeErrorIfNotProvidedInt) -> Union[(int, TDefault)]:
"Returns the number of qubits, qudits, or qids `val` operates on.\n\n Args:\n val: The value to get the number of qubits from.\n default: Determines the fallback behavior when `val` doesn't have\n a number of qubits. If `default` is not set, a TypeError is raised.\n If default is set to a value, that value is returned.\n\n Returns:\n If `val` has a `_num_qubits_` method and its result is not\n NotImplemented, that result is returned. Otherwise, if `val` has a\n `_qid_shape_` method, the number of qubits is computed from the length\n of the shape and returned e.g. `len(shape)`. If neither method returns a\n value other than NotImplemented and a default value was specified, the\n default value is returned.\n\n Raises:\n TypeError: `val` doesn't have either a `_num_qubits_` or a `_qid_shape_`\n method (or they returned NotImplemented) and also no default value\n was specified.\n "
num_getter = getattr(val, '_num_qubits_', None)
num_qubits = (NotImplemented if (num_getter is None) else num_getter())
if (num_qubits is not NotImplemented):
return num_qubits
getter = getattr(val, '_qid_shape_', None)
shape = (NotImplemented if (getter is None) else getter())
if (shape is not NotImplemented):
return len(shape)
if (isinstance(val, Sequence) and all((isinstance(q, ops.Qid) for q in val))):
return len(val)
if (default is not RaiseTypeErrorIfNotProvidedInt):
return default
if (num_getter is not None):
raise TypeError("object of type '{}' does have a _num_qubits_ method, but it returned NotImplemented.".format(type(val)))
if (getter is not None):
raise TypeError("object of type '{}' does have a _qid_shape_ method, but it returned NotImplemented.".format(type(val)))
raise TypeError(f"object of type '{type(val)}' has no _num_qubits_ or _qid_shape_ methods.") | Returns the number of qubits, qudits, or qids `val` operates on.
Args:
val: The value to get the number of qubits from.
default: Determines the fallback behavior when `val` doesn't have
a number of qubits. If `default` is not set, a TypeError is raised.
If default is set to a value, that value is returned.
Returns:
If `val` has a `_num_qubits_` method and its result is not
NotImplemented, that result is returned. Otherwise, if `val` has a
`_qid_shape_` method, the number of qubits is computed from the length
of the shape and returned e.g. `len(shape)`. If neither method returns a
value other than NotImplemented and a default value was specified, the
default value is returned.
Raises:
TypeError: `val` doesn't have either a `_num_qubits_` or a `_qid_shape_`
method (or they returned NotImplemented) and also no default value
was specified. | cirq-core/cirq/protocols/qid_shape_protocol.py | num_qubits | viathor/Cirq | 3,326 | python | def num_qubits(val: Any, default: TDefault=RaiseTypeErrorIfNotProvidedInt) -> Union[(int, TDefault)]:
"Returns the number of qubits, qudits, or qids `val` operates on.\n\n Args:\n val: The value to get the number of qubits from.\n default: Determines the fallback behavior when `val` doesn't have\n a number of qubits. If `default` is not set, a TypeError is raised.\n If default is set to a value, that value is returned.\n\n Returns:\n If `val` has a `_num_qubits_` method and its result is not\n NotImplemented, that result is returned. Otherwise, if `val` has a\n `_qid_shape_` method, the number of qubits is computed from the length\n of the shape and returned e.g. `len(shape)`. If neither method returns a\n value other than NotImplemented and a default value was specified, the\n default value is returned.\n\n Raises:\n TypeError: `val` doesn't have either a `_num_qubits_` or a `_qid_shape_`\n method (or they returned NotImplemented) and also no default value\n was specified.\n "
num_getter = getattr(val, '_num_qubits_', None)
num_qubits = (NotImplemented if (num_getter is None) else num_getter())
if (num_qubits is not NotImplemented):
return num_qubits
getter = getattr(val, '_qid_shape_', None)
shape = (NotImplemented if (getter is None) else getter())
if (shape is not NotImplemented):
return len(shape)
if (isinstance(val, Sequence) and all((isinstance(q, ops.Qid) for q in val))):
return len(val)
if (default is not RaiseTypeErrorIfNotProvidedInt):
return default
if (num_getter is not None):
raise TypeError("object of type '{}' does have a _num_qubits_ method, but it returned NotImplemented.".format(type(val)))
if (getter is not None):
raise TypeError("object of type '{}' does have a _qid_shape_ method, but it returned NotImplemented.".format(type(val)))
raise TypeError(f"object of type '{type(val)}' has no _num_qubits_ or _qid_shape_ methods.") | def num_qubits(val: Any, default: TDefault=RaiseTypeErrorIfNotProvidedInt) -> Union[(int, TDefault)]:
"Returns the number of qubits, qudits, or qids `val` operates on.\n\n Args:\n val: The value to get the number of qubits from.\n default: Determines the fallback behavior when `val` doesn't have\n a number of qubits. If `default` is not set, a TypeError is raised.\n If default is set to a value, that value is returned.\n\n Returns:\n If `val` has a `_num_qubits_` method and its result is not\n NotImplemented, that result is returned. Otherwise, if `val` has a\n `_qid_shape_` method, the number of qubits is computed from the length\n of the shape and returned e.g. `len(shape)`. If neither method returns a\n value other than NotImplemented and a default value was specified, the\n default value is returned.\n\n Raises:\n TypeError: `val` doesn't have either a `_num_qubits_` or a `_qid_shape_`\n method (or they returned NotImplemented) and also no default value\n was specified.\n "
num_getter = getattr(val, '_num_qubits_', None)
num_qubits = (NotImplemented if (num_getter is None) else num_getter())
if (num_qubits is not NotImplemented):
return num_qubits
getter = getattr(val, '_qid_shape_', None)
shape = (NotImplemented if (getter is None) else getter())
if (shape is not NotImplemented):
return len(shape)
if (isinstance(val, Sequence) and all((isinstance(q, ops.Qid) for q in val))):
return len(val)
if (default is not RaiseTypeErrorIfNotProvidedInt):
return default
if (num_getter is not None):
raise TypeError("object of type '{}' does have a _num_qubits_ method, but it returned NotImplemented.".format(type(val)))
if (getter is not None):
raise TypeError("object of type '{}' does have a _qid_shape_ method, but it returned NotImplemented.".format(type(val)))
raise TypeError(f"object of type '{type(val)}' has no _num_qubits_ or _qid_shape_ methods.")<|docstring|>Returns the number of qubits, qudits, or qids `val` operates on.
Args:
val: The value to get the number of qubits from.
default: Determines the fallback behavior when `val` doesn't have
a number of qubits. If `default` is not set, a TypeError is raised.
If default is set to a value, that value is returned.
Returns:
If `val` has a `_num_qubits_` method and its result is not
NotImplemented, that result is returned. Otherwise, if `val` has a
`_qid_shape_` method, the number of qubits is computed from the length
of the shape and returned e.g. `len(shape)`. If neither method returns a
value other than NotImplemented and a default value was specified, the
default value is returned.
Raises:
TypeError: `val` doesn't have either a `_num_qubits_` or a `_qid_shape_`
method (or they returned NotImplemented) and also no default value
was specified.<|endoftext|> |
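
The companion sketch for num_qubits, again assuming a recent Cirq release; it touches the _num_qubits_/_qid_shape_ path, the Qid-sequence branch, and the default fallback described above.

import cirq

assert cirq.num_qubits(cirq.CCZ) == 3                                   # from the gate's own protocol methods
assert cirq.num_qubits(cirq.LineQid.range(2, dimension=3)) == 2         # length of the Qid sequence
assert cirq.num_qubits("not an operation", default=0) == 0              # default instead of TypeError
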
ab1307b0151a7cf610b359ff1ee5bea7ac0c4bfc2a77220e9ba1578ae4e934fd | @doc_private
def _qid_shape_(self) -> Union[(Tuple[(int, ...)], NotImplementedType)]:
'A tuple specifying the number of quantum levels of each qid this\n object operates on, e.g. (2, 2, 2) for a three-qubit gate.\n\n This method is used by the global `cirq.qid_shape` method (and by\n `cirq.num_qubits` if `_num_qubits_` is not defined). If this\n method is not present, or returns NotImplemented, it is assumed that the\n receiving object operates on qubits. (The ability to return\n NotImplemented is useful when a class cannot know if it has a shape\n until runtime.)\n\n The order of values in the tuple is always implicit with respect to the\n object being called. For example, for gates the tuple must be ordered\n with respect to the list of qubits that the gate is applied to. For\n operations, the tuple is ordered to match the list returned by its\n `qubits` attribute.\n\n Returns:\n The qid shape of this value, or NotImplemented if the shape is\n unknown.\n ' | A tuple specifying the number of quantum levels of each qid this
object operates on, e.g. (2, 2, 2) for a three-qubit gate.
This method is used by the global `cirq.qid_shape` method (and by
`cirq.num_qubits` if `_num_qubits_` is not defined). If this
method is not present, or returns NotImplemented, it is assumed that the
receiving object operates on qubits. (The ability to return
NotImplemented is useful when a class cannot know if it has a shape
until runtime.)
The order of values in the tuple is always implicit with respect to the
object being called. For example, for gates the tuple must be ordered
with respect to the list of qubits that the gate is applied to. For
operations, the tuple is ordered to match the list returned by its
`qubits` attribute.
Returns:
The qid shape of this value, or NotImplemented if the shape is
unknown. | cirq-core/cirq/protocols/qid_shape_protocol.py | _qid_shape_ | viathor/Cirq | 3,326 | python | @doc_private
def _qid_shape_(self) -> Union[(Tuple[(int, ...)], NotImplementedType)]:
'A tuple specifying the number of quantum levels of each qid this\n object operates on, e.g. (2, 2, 2) for a three-qubit gate.\n\n This method is used by the global `cirq.qid_shape` method (and by\n `cirq.num_qubits` if `_num_qubits_` is not defined). If this\n method is not present, or returns NotImplemented, it is assumed that the\n receiving object operates on qubits. (The ability to return\n NotImplemented is useful when a class cannot know if it has a shape\n until runtime.)\n\n The order of values in the tuple is always implicit with respect to the\n object being called. For example, for gates the tuple must be ordered\n with respect to the list of qubits that the gate is applied to. For\n operations, the tuple is ordered to match the list returned by its\n `qubits` attribute.\n\n Returns:\n The qid shape of this value, or NotImplemented if the shape is\n unknown.\n ' | @doc_private
def _qid_shape_(self) -> Union[(Tuple[(int, ...)], NotImplementedType)]:
'A tuple specifying the number of quantum levels of each qid this\n object operates on, e.g. (2, 2, 2) for a three-qubit gate.\n\n This method is used by the global `cirq.qid_shape` method (and by\n `cirq.num_qubits` if `_num_qubits_` is not defined). If this\n method is not present, or returns NotImplemented, it is assumed that the\n receiving object operates on qubits. (The ability to return\n NotImplemented is useful when a class cannot know if it has a shape\n until runtime.)\n\n The order of values in the tuple is always implicit with respect to the\n object being called. For example, for gates the tuple must be ordered\n with respect to the list of qubits that the gate is applied to. For\n operations, the tuple is ordered to match the list returned by its\n `qubits` attribute.\n\n Returns:\n The qid shape of this value, or NotImplemented if the shape is\n unknown.\n '<|docstring|>A tuple specifying the number of quantum levels of each qid this
object operates on, e.g. (2, 2, 2) for a three-qubit gate.
This method is used by the global `cirq.qid_shape` method (and by
`cirq.num_qubits` if `_num_qubits_` is not defined). If this
method is not present, or returns NotImplemented, it is assumed that the
receiving object operates on qubits. (The ability to return
NotImplemented is useful when a class cannot know if it has a shape
until runtime.)
The order of values in the tuple is always implicit with respect to the
object being called. For example, for gates the tuple must be ordered
with respect to the list of qubits that the gate is applied to. For
operations, the tuple is ordered to match the list returned by its
`qubits` attribute.
Returns:
The qid shape of this value, or NotImplemented if the shape is
unknown.<|endoftext|> |
96b727df1a2b1b2e1ecb4d58a1adaaaf1b6cf5bb978a31d9344f3c9da7be6060 | @document
def _num_qubits_(self) -> Union[(int, NotImplementedType)]:
'The number of qubits, qudits, or qids this object operates on.\n\n This method is used by the global `cirq.num_qubits` method (and by\n `cirq.qid_shape` if `_qid_shape_` is not defined. If this\n method is not present, or returns NotImplemented, it will fallback\n to using the length of `_qid_shape_`.\n\n Returns:\n An integer specifying the number of qubits, qudits or qids.\n ' | The number of qubits, qudits, or qids this object operates on.
This method is used by the global `cirq.num_qubits` method (and by
`cirq.qid_shape` if `_qid_shape_` is not defined. If this
method is not present, or returns NotImplemented, it will fallback
to using the length of `_qid_shape_`.
Returns:
An integer specifying the number of qubits, qudits or qids. | cirq-core/cirq/protocols/qid_shape_protocol.py | _num_qubits_ | viathor/Cirq | 3,326 | python | @document
def _num_qubits_(self) -> Union[(int, NotImplementedType)]:
'The number of qubits, qudits, or qids this object operates on.\n\n This method is used by the global `cirq.num_qubits` method (and by\n `cirq.qid_shape` if `_qid_shape_` is not defined. If this\n method is not present, or returns NotImplemented, it will fallback\n to using the length of `_qid_shape_`.\n\n Returns:\n An integer specifying the number of qubits, qudits or qids.\n ' | @document
def _num_qubits_(self) -> Union[(int, NotImplementedType)]:
'The number of qubits, qudits, or qids this object operates on.\n\n This method is used by the global `cirq.num_qubits` method (and by\n `cirq.qid_shape` if `_qid_shape_` is not defined. If this\n method is not present, or returns NotImplemented, it will fallback\n to using the length of `_qid_shape_`.\n\n Returns:\n An integer specifying the number of qubits, qudits or qids.\n '<|docstring|>The number of qubits, qudits, or qids this object operates on.
This method is used by the global `cirq.num_qubits` method (and by
`cirq.qid_shape` if `_qid_shape_` is not defined. If this
method is not present, or returns NotImplemented, it will fallback
to using the length of `_qid_shape_`.
Returns:
An integer specifying the number of qubits, qudits or qids.<|endoftext|> |
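
A compact example of implementing the two protocol hooks documented above on a user-defined gate; the gate name and its two-qutrit shape are illustrative only and not taken from Cirq.

import cirq

class TwoQutritGate(cirq.Gate):
    # Defining _qid_shape_ is enough: cirq.num_qubits falls back to len(shape).
    def _qid_shape_(self):
        return (3, 3)

g = TwoQutritGate()
assert cirq.qid_shape(g) == (3, 3)
assert cirq.num_qubits(g) == 2
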
0a27113aa0679d942f19f1671e6e5e2d7073790d780738896f8b93accfe9eba4 | def pil_to_flatten_data(img):
'\n Convert data from [(R1, G1, B1, A1), (R2, G2, B2, A2)] to [R1, G1, B1, A1, R2, G2, B2, A2]\n '
return [x for p in img.convert('RGBA').getdata() for x in p] | Convert data from [(R1, G1, B1, A1), (R2, G2, B2, A2)] to [R1, G1, B1, A1, R2, G2, B2, A2] | test_pixelmatch.py | pil_to_flatten_data | Mattwmaster58/pixelmatch-py | 0 | python | def pil_to_flatten_data(img):
'\n \n '
return [x for p in img.convert('RGBA').getdata() for x in p] | def pil_to_flatten_data(img):
'\n \n '
return [x for p in img.convert('RGBA').getdata() for x in p]<|docstring|>Convert data from [(R1, G1, B1, A1), (R2, G2, B2, A2)] to [R1, G1, B1, A1, R2, G2, B2, A2]<|endoftext|> |
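
A tiny round-trip check of the flattening helper above, assuming Pillow is installed and pil_to_flatten_data is defined exactly as in the record.

from PIL import Image

img = Image.new("RGB", (2, 1), color=(255, 0, 0))         # two red pixels
assert pil_to_flatten_data(img) == [255, 0, 0, 255,        # pixel 1 as R, G, B, A
                                    255, 0, 0, 255]        # pixel 2 as R, G, B, A
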
edebb23ee862ead98d305747eef9713e96031a245f56c3b5884b5bd8e87de66a | def _load_lib(self):
'Load the SRES C API binary'
this_directory = os.path.join(os.path.dirname(__file__))
shared_lib = os.path.join(this_directory, f'{self._get_shared_library_prefix()}SRES{self._get_shared_library_extension()}')
lib = ct.CDLL(shared_lib)
return lib | Load the SRES C API binary | sresFromMoonfit/sres.py | _load_lib | CiaranWelsh/SRES | 1 | python | def _load_lib(self):
this_directory = os.path.join(os.path.dirname(__file__))
shared_lib = os.path.join(this_directory, f'{self._get_shared_library_prefix()}SRES{self._get_shared_library_extension()}')
lib = ct.CDLL(shared_lib)
return lib | def _load_lib(self):
this_directory = os.path.join(os.path.dirname(__file__))
shared_lib = os.path.join(this_directory, f'{self._get_shared_library_prefix()}SRES{self._get_shared_library_extension()}')
lib = ct.CDLL(shared_lib)
return lib<|docstring|>Load the SRES C API binary<|endoftext|> |
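
The prefix/extension helpers referenced above are not shown in this record, so the sketch below substitutes a plausible platform check; only the ctypes.CDLL(path) call itself mirrors the source.

import ctypes as ct
import os
import sys

def shared_library_name(stem: str) -> str:
    # Assumed stand-in for _get_shared_library_prefix/_get_shared_library_extension.
    if sys.platform.startswith("win"):
        return f"{stem}.dll"
    if sys.platform == "darwin":
        return f"lib{stem}.dylib"
    return f"lib{stem}.so"

here = os.path.dirname(os.path.abspath(__file__))
sres_lib = ct.CDLL(os.path.join(here, shared_library_name("SRES")))
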
a0e6f6ad3da8aae83485b7cbc31378a27235b01b23b18a28af60e82e4406055c | def _load_func(self, funcname: str, argtypes: List, return_type) -> ct.CDLL._FuncPtr:
'Load a single function from SRES shared library'
func = self._lib.__getattr__(funcname)
func.restype = return_type
func.argtypes = argtypes
return func | Load a single function from SRES shared library | sresFromMoonfit/sres.py | _load_func | CiaranWelsh/SRES | 1 | python | def _load_func(self, funcname: str, argtypes: List, return_type) -> ct.CDLL._FuncPtr:
func = self._lib.__getattr__(funcname)
func.restype = return_type
func.argtypes = argtypes
return func | def _load_func(self, funcname: str, argtypes: List, return_type) -> ct.CDLL._FuncPtr:
func = self._lib.__getattr__(funcname)
func.restype = return_type
func.argtypes = argtypes
return func<|docstring|>Load a single function from SRES shared library<|endoftext|> |
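
Declaring argtypes/restype as done above is what lets ctypes marshal Python values across the C boundary; a self-contained sketch against libm (the library name assumes a glibc Linux system).

import ctypes as ct

libm = ct.CDLL("libm.so.6")          # assumption: glibc Linux; the name differs on macOS/Windows
cos = libm.cos
cos.argtypes = [ct.c_double]         # C signature: double cos(double)
cos.restype = ct.c_double
assert abs(cos(0.0) - 1.0) < 1e-12
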
0dfc81386c1bff136a329c12401e934d83c234ab92571ecd033a0264461f46ca | def _makeDoubleArrayPtr(self, input: List[float]):
'returns a ctypes double array from input'
ctypes_double_type = (ct.c_double * len(input))
my_double_arr = ctypes_double_type(*input)
return ct.pointer(my_double_arr) | returns a ctypes double array from input | sresFromMoonfit/sres.py | _makeDoubleArrayPtr | CiaranWelsh/SRES | 1 | python | def _makeDoubleArrayPtr(self, input: List[float]):
ctypes_double_type = (ct.c_double * len(input))
my_double_arr = ctypes_double_type(*input)
return ct.pointer(my_double_arr) | def _makeDoubleArrayPtr(self, input: List[float]):
ctypes_double_type = (ct.c_double * len(input))
my_double_arr = ctypes_double_type(*input)
return ct.pointer(my_double_arr)<|docstring|>returns a ctypes double array from input<|endoftext|> |
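
The array-building idiom above in isolation: multiply the element type by a length to get a fixed-size array type, instantiate it from the list, then take a pointer.

import ctypes as ct

values = [0.1, 0.2, 0.3]
DoubleArray = ct.c_double * len(values)      # fixed-size C array type
arr = DoubleArray(*values)
ptr = ct.pointer(arr)                        # what the helper above returns
assert list(ptr.contents) == values
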
9207e7bb8dba940154e732bf04981faa2af56eaf0c38b6a4c45c8564e5664b62 | def _loadESInitial(self):
'\n The dimensions of the optimization problem are not known\n before loading SRES so this must be done in a method.\n :param dim:\n :return:\n '
return self._sres._load_func(funcname='ESInitial', argtypes=[ct.c_int32, ct.c_int64, ct.c_int64, self.callback(self.dim.value), ct.c_int32, ct.c_int32, ct.c_int32, ct.POINTER((ct.c_double * self.dim.value)), ct.POINTER((ct.c_double * self.dim.value)), ct.c_int32, ct.c_int32, ct.c_int32, ct.c_double, ct.c_double, ct.c_double, ct.c_int32, ct.c_int64, ct.c_int64], return_type=None) | The dimensions of the optimization problem are not known
before loading SRES so this must be done in a method.
:param dim:
:return: | sresFromMoonfit/sres.py | _loadESInitial | CiaranWelsh/SRES | 1 | python | def _loadESInitial(self):
'\n The dimensions of the optimization problem are not known\n before loading SRES so this must be done in a method.\n :param dim:\n :return:\n '
return self._sres._load_func(funcname='ESInitial', argtypes=[ct.c_int32, ct.c_int64, ct.c_int64, self.callback(self.dim.value), ct.c_int32, ct.c_int32, ct.c_int32, ct.POINTER((ct.c_double * self.dim.value)), ct.POINTER((ct.c_double * self.dim.value)), ct.c_int32, ct.c_int32, ct.c_int32, ct.c_double, ct.c_double, ct.c_double, ct.c_int32, ct.c_int64, ct.c_int64], return_type=None) | def _loadESInitial(self):
'\n The dimensions of the optimization problem are not known\n before loading SRES so this must be done in a method.\n :param dim:\n :return:\n '
return self._sres._load_func(funcname='ESInitial', argtypes=[ct.c_int32, ct.c_int64, ct.c_int64, self.callback(self.dim.value), ct.c_int32, ct.c_int32, ct.c_int32, ct.POINTER((ct.c_double * self.dim.value)), ct.POINTER((ct.c_double * self.dim.value)), ct.c_int32, ct.c_int32, ct.c_int32, ct.c_double, ct.c_double, ct.c_double, ct.c_int32, ct.c_int64, ct.c_int64], return_type=None)<|docstring|>The dimensions of the optimization problem are not known
before loading SRES so this must be done in a method.
:param dim:
:return:<|endoftext|> |
78be8885e4af633c2113436ab56de56e2b35e28424dc267e75fc256d5d619f11 | def handle_error(method: BotXMethod, response: HTTPResponse) -> NoReturn:
'Handle "file deleted" error response.\n\n Arguments:\n method: method which was made before error.\n response: HTTP response from BotX API.\n\n Raises:\n FileDeletedError: raised always.\n '
parsed_response = APIErrorResponse[FileDeletedErrorData].parse_obj(response.json_body)
error_data = parsed_response.error_data
raise FileDeletedError(url=method.url, method=method.http_method, response_content=response.json_body, status_content=response.status_code, error_description=error_data.error_description) | Handle "file deleted" error response.
Arguments:
method: method which was made before error.
response: HTTP response from BotX API.
Raises:
FileDeletedError: raised always. | botx/clients/methods/errors/files/file_deleted.py | handle_error | ExpressApp/pybotx | 13 | python | def handle_error(method: BotXMethod, response: HTTPResponse) -> NoReturn:
'Handle "file deleted" error response.\n\n Arguments:\n method: method which was made before error.\n response: HTTP response from BotX API.\n\n Raises:\n FileDeletedError: raised always.\n '
parsed_response = APIErrorResponse[FileDeletedErrorData].parse_obj(response.json_body)
error_data = parsed_response.error_data
raise FileDeletedError(url=method.url, method=method.http_method, response_content=response.json_body, status_content=response.status_code, error_description=error_data.error_description) | def handle_error(method: BotXMethod, response: HTTPResponse) -> NoReturn:
'Handle "file deleted" error response.\n\n Arguments:\n method: method which was made before error.\n response: HTTP response from BotX API.\n\n Raises:\n FileDeletedError: raised always.\n '
parsed_response = APIErrorResponse[FileDeletedErrorData].parse_obj(response.json_body)
error_data = parsed_response.error_data
raise FileDeletedError(url=method.url, method=method.http_method, response_content=response.json_body, status_content=response.status_code, error_description=error_data.error_description)<|docstring|>Handle "file deleted" error response.
Arguments:
method: method which was made before error.
response: HTTP response from BotX API.
Raises:
FileDeletedError: raised always.<|endoftext|> |
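
The record shows a recurring pattern: parse a known error payload into a typed model, then raise a domain-specific exception that carries the request context. Below is a library-agnostic sketch of the same idea; every name in it is hypothetical and none of it is pybotx's actual API.

from typing import NoReturn

class UploadedFileDeletedError(Exception):                     # hypothetical stand-in
    def __init__(self, url: str, status: int, description: str) -> None:
        super().__init__(f"{url} returned {status}: {description}")

def handle_file_deleted(url: str, status: int, json_body: dict) -> NoReturn:
    description = json_body.get("error_data", {}).get("error_description", "unknown")
    raise UploadedFileDeletedError(url, status, description)
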
58a1b53f4589b4474bad3a971b913ce4a30a09def92217816d0ebae17799dba1 | @pytest.fixture
def teams_and_members(review_teams):
'Fixture with a dictionary contain a few teams with member lists.'
return {'one': ['first', 'second'], 'two': ['two'], 'last_team': [str(i) for i in range(10)], **review_teams} | Fixture with a dictionary contain a few teams with member lists. | tests/unit_tests/repobee/plugin_tests/test_github.py | teams_and_members | slarse/repobee | 39 | python | @pytest.fixture
def teams_and_members(review_teams):
return {'one': ['first', 'second'], 'two': ['two'], 'last_team': [str(i) for i in range(10)], **review_teams} | @pytest.fixture
def teams_and_members(review_teams):
return {'one': ['first', 'second'], 'two': ['two'], 'last_team': [str(i) for i in range(10)], **review_teams}<|docstring|>Fixture with a dictionary contain a few teams with member lists.<|endoftext|> |
7cac3eadb146d794052cc846ea76554e7165748494923df90ff6893819ca5a6b | @pytest.fixture
def happy_github(mocker, monkeypatch, teams_and_members):
'mock of github.Github which raises no exceptions and returns the\n correct values.\n '
github_instance = MagicMock()
github_instance.get_user.side_effect = (lambda user: (User(login=user) if (user in [USER, NOT_MEMBER]) else raise_404()))
type(github_instance).oauth_scopes = PropertyMock(return_value=REQUIRED_TOKEN_SCOPES)
usernames = set(itertools.chain(*[members for (_, members) in teams_and_members.items()]))
def get_user(username):
if (username in [*usernames, USER, NOT_MEMBER]):
user = MagicMock(spec=github.NamedUser.NamedUser)
type(user).login = PropertyMock(return_value=username)
return user
else:
raise_404()
github_instance.get_user.side_effect = get_user
monkeypatch.setattr(github, 'GithubException', GithubException)
mocker.patch('github.Github', side_effect=(lambda login_or_token, base_url: github_instance))
return github_instance | mock of github.Github which raises no exceptions and returns the
correct values. | tests/unit_tests/repobee/plugin_tests/test_github.py | happy_github | slarse/repobee | 39 | python | @pytest.fixture
def happy_github(mocker, monkeypatch, teams_and_members):
'mock of github.Github which raises no exceptions and returns the\n correct values.\n '
github_instance = MagicMock()
github_instance.get_user.side_effect = (lambda user: (User(login=user) if (user in [USER, NOT_MEMBER]) else raise_404()))
type(github_instance).oauth_scopes = PropertyMock(return_value=REQUIRED_TOKEN_SCOPES)
usernames = set(itertools.chain(*[members for (_, members) in teams_and_members.items()]))
def get_user(username):
if (username in [*usernames, USER, NOT_MEMBER]):
user = MagicMock(spec=github.NamedUser.NamedUser)
type(user).login = PropertyMock(return_value=username)
return user
else:
raise_404()
github_instance.get_user.side_effect = get_user
monkeypatch.setattr(github, 'GithubException', GithubException)
mocker.patch('github.Github', side_effect=(lambda login_or_token, base_url: github_instance))
return github_instance | @pytest.fixture
def happy_github(mocker, monkeypatch, teams_and_members):
'mock of github.Github which raises no exceptions and returns the\n correct values.\n '
github_instance = MagicMock()
github_instance.get_user.side_effect = (lambda user: (User(login=user) if (user in [USER, NOT_MEMBER]) else raise_404()))
type(github_instance).oauth_scopes = PropertyMock(return_value=REQUIRED_TOKEN_SCOPES)
usernames = set(itertools.chain(*[members for (_, members) in teams_and_members.items()]))
def get_user(username):
if (username in [*usernames, USER, NOT_MEMBER]):
user = MagicMock(spec=github.NamedUser.NamedUser)
type(user).login = PropertyMock(return_value=username)
return user
else:
raise_404()
github_instance.get_user.side_effect = get_user
monkeypatch.setattr(github, 'GithubException', GithubException)
mocker.patch('github.Github', side_effect=(lambda login_or_token, base_url: github_instance))
return github_instance<|docstring|>mock of github.Github which raises no exceptions and returns the
correct values.<|endoftext|> |
340e0e6d4f4fc4d7126f8adc666a060d51d6af0d688208f324d5617cb340e08c | @pytest.fixture
def organization(happy_github):
'Attaches an Organization mock to github.Github.get_organization, and\n returns the mock.\n '
return create_mock_organization(happy_github, ORG_NAME, ['blablabla', 'hello', USER]) | Attaches an Organization mock to github.Github.get_organization, and
returns the mock. | tests/unit_tests/repobee/plugin_tests/test_github.py | organization | slarse/repobee | 39 | python | @pytest.fixture
def organization(happy_github):
'Attaches an Organization mock to github.Github.get_organization, and\n returns the mock.\n '
return create_mock_organization(happy_github, ORG_NAME, ['blablabla', 'hello', USER]) | @pytest.fixture
def organization(happy_github):
'Attaches an Organization mock to github.Github.get_organization, and\n returns the mock.\n '
return create_mock_organization(happy_github, ORG_NAME, ['blablabla', 'hello', USER])<|docstring|>Attaches an Organization mock to github.Github.get_organization, and
returns the mock.<|endoftext|> |
153c0280ee8b2a9ce436adf6a16e309f81ce7382e7f69b3bb21d118bdf1f0989 | def mock_team(name):
'create a mock team that tracks its members.'
team = MagicMock()
members = set()
team.get_members.side_effect = (lambda : list(members))
team.add_membership.side_effect = (lambda user: members.add(user))
type(team).name = PropertyMock(return_value=name)
type(team).id = PropertyMock(return_value=hash(name))
return team | create a mock team that tracks its members. | tests/unit_tests/repobee/plugin_tests/test_github.py | mock_team | slarse/repobee | 39 | python | def mock_team(name):
team = MagicMock()
members = set()
team.get_members.side_effect = (lambda : list(members))
team.add_membership.side_effect = (lambda user: members.add(user))
type(team).name = PropertyMock(return_value=name)
type(team).id = PropertyMock(return_value=hash(name))
return team | def mock_team(name):
team = MagicMock()
members = set()
team.get_members.side_effect = (lambda : list(members))
team.add_membership.side_effect = (lambda user: members.add(user))
type(team).name = PropertyMock(return_value=name)
type(team).id = PropertyMock(return_value=hash(name))
return team<|docstring|>create a mock team that tracks its members.<|endoftext|> |
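
One detail worth noting in the record above: `name` cannot simply be passed to the Mock constructor (it is reserved for the mock's own repr), so it is stubbed as a PropertyMock on type(team). A minimal standalone demonstration of the same member-tracking mock:

from unittest.mock import MagicMock, PropertyMock

team = MagicMock()
type(team).name = PropertyMock(return_value="teachers")    # properties live on the class, hence type(team)
members = set()
team.add_membership.side_effect = members.add
team.get_members.side_effect = lambda: sorted(members)

team.add_membership("alice")
team.add_membership("bob")
assert team.name == "teachers"
assert team.get_members() == ["alice", "bob"]
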
645b4fe77c49bec9ca99c591e9e0873ffb830a7e7387172ea91812b78880c55f | @pytest.fixture
def no_teams(organization):
'A fixture that sets up the teams functionality without adding any\n teams.\n '
ids_to_teams = {}
organization.get_team.side_effect = (lambda team_id: (ids_to_teams[team_id] if (team_id in ids_to_teams) else raise_404()))
organization.get_teams.side_effect = (lambda : list(teams_))
teams_ = []
def create_team(name, permission):
nonlocal teams_, ids_to_teams
assert (permission in ['push', 'pull'])
if (name in [team.name for team in teams_]):
raise_422()
team = mock_team(name)
ids_to_teams[team.id] = team
teams_.append(team)
return team
organization.create_team.side_effect = create_team
return teams_ | A fixture that sets up the teams functionality without adding any
teams. | tests/unit_tests/repobee/plugin_tests/test_github.py | no_teams | slarse/repobee | 39 | python | @pytest.fixture
def no_teams(organization):
'A fixture that sets up the teams functionality without adding any\n teams.\n '
ids_to_teams = {}
organization.get_team.side_effect = (lambda team_id: (ids_to_teams[team_id] if (team_id in ids_to_teams) else raise_404()))
organization.get_teams.side_effect = (lambda : list(teams_))
teams_ = []
def create_team(name, permission):
nonlocal teams_, ids_to_teams
assert (permission in ['push', 'pull'])
if (name in [team.name for team in teams_]):
raise_422()
team = mock_team(name)
ids_to_teams[team.id] = team
teams_.append(team)
return team
organization.create_team.side_effect = create_team
return teams_ | @pytest.fixture
def no_teams(organization):
'A fixture that sets up the teams functionality without adding any\n teams.\n '
ids_to_teams = {}
organization.get_team.side_effect = (lambda team_id: (ids_to_teams[team_id] if (team_id in ids_to_teams) else raise_404()))
organization.get_teams.side_effect = (lambda : list(teams_))
teams_ = []
def create_team(name, permission):
nonlocal teams_, ids_to_teams
assert (permission in ['push', 'pull'])
if (name in [team.name for team in teams_]):
raise_422()
team = mock_team(name)
ids_to_teams[team.id] = team
teams_.append(team)
return team
organization.create_team.side_effect = create_team
return teams_<|docstring|>A fixture that sets up the teams functionality without adding any
teams.<|endoftext|> |
e78b312e8aab8a1fb624badff707ed131f2da97add95901995e5b8a5653810e8 | @pytest.fixture
def teams(organization, no_teams, teams_and_members):
'A fixture that returns a list of teams, which are all returned by the\n github.Organization.Organization.get_teams function.'
team_names = teams_and_members.keys()
for name in team_names:
organization.create_team(name, permission='push')
return no_teams | A fixture that returns a list of teams, which are all returned by the
github.Organization.Organization.get_teams function. | tests/unit_tests/repobee/plugin_tests/test_github.py | teams | slarse/repobee | 39 | python | @pytest.fixture
def teams(organization, no_teams, teams_and_members):
'A fixture that returns a list of teams, which are all returned by the\n github.Organization.Organization.get_teams function.'
team_names = teams_and_members.keys()
for name in team_names:
organization.create_team(name, permission='push')
return no_teams | @pytest.fixture
def teams(organization, no_teams, teams_and_members):
'A fixture that returns a list of teams, which are all returned by the\n github.Organization.Organization.get_teams function.'
team_names = teams_and_members.keys()
for name in team_names:
organization.create_team(name, permission='push')
return no_teams<|docstring|>A fixture that returns a list of teams, which are all returned by the
github.Organization.Organization.get_teams function.<|endoftext|> |
11a4d36c8365b09faaae34dd8fc079291072b306a285c58f26d27ba10c43d5ed | @pytest.fixture
def issues(repos):
'Adds two issues to all repos such that Repo.get_issues returns the\n issues. One issue is expected to be closed and has title CLOSE_ISSUE.title\n and is marked with, while the other is expected not to be closed and has\n title DONT_CLOSE_ISSUE.title.\n '
def attach_issues(repo):
open_issue_mocks = [to_magic_mock_issue(issue) for issue in OPEN_ISSUES]
closed_issue_mocks = [to_magic_mock_issue(issue) for issue in CLOSED_ISSUES]
repo.get_issues.side_effect = (lambda state: (open_issue_mocks if (state == 'open') else closed_issue_mocks))
return (open_issue_mocks + closed_issue_mocks)
issues = []
for repo in repos:
issues.extend(attach_issues(repo))
return issues | Adds two issues to all repos such that Repo.get_issues returns the
issues. One issue is expected to be closed and has title CLOSE_ISSUE.title
and is marked with, while the other is expected not to be closed and has
title DONT_CLOSE_ISSUE.title. | tests/unit_tests/repobee/plugin_tests/test_github.py | issues | slarse/repobee | 39 | python | @pytest.fixture
def issues(repos):
'Adds two issues to all repos such that Repo.get_issues returns the\n issues. One issue is expected to be closed and has title CLOSE_ISSUE.title\n and is marked with, while the other is expected not to be closed and has\n title DONT_CLOSE_ISSUE.title.\n '
def attach_issues(repo):
open_issue_mocks = [to_magic_mock_issue(issue) for issue in OPEN_ISSUES]
closed_issue_mocks = [to_magic_mock_issue(issue) for issue in CLOSED_ISSUES]
repo.get_issues.side_effect = (lambda state: (open_issue_mocks if (state == 'open') else closed_issue_mocks))
return (open_issue_mocks + closed_issue_mocks)
issues = []
for repo in repos:
issues.extend(attach_issues(repo))
return issues | @pytest.fixture
def issues(repos):
'Adds two issues to all repos such that Repo.get_issues returns the\n issues. One issue is expected to be closed and has title CLOSE_ISSUE.title\n and is marked with, while the other is expected not to be closed and has\n title DONT_CLOSE_ISSUE.title.\n '
def attach_issues(repo):
open_issue_mocks = [to_magic_mock_issue(issue) for issue in OPEN_ISSUES]
closed_issue_mocks = [to_magic_mock_issue(issue) for issue in CLOSED_ISSUES]
repo.get_issues.side_effect = (lambda state: (open_issue_mocks if (state == 'open') else closed_issue_mocks))
return (open_issue_mocks + closed_issue_mocks)
issues = []
for repo in repos:
issues.extend(attach_issues(repo))
return issues<|docstring|>Adds two issues to all repos such that Repo.get_issues returns the
issues. One issue is expected to be closed and has title CLOSE_ISSUE.title
and is marked with, while the other is expected not to be closed and has
title DONT_CLOSE_ISSUE.title.<|endoftext|> |
f14569ed189bc4eae97cbe419b7f65b02b0bb8efda3304b689ea3acbab6c50e8 | @pytest.fixture(params=['get_user', 'get_organization'])
def github_bad_info(request, api, happy_github):
'Fixture with a github instance that raises GithubException 404 when\n use the user, base_url and org_name arguments to .\n '
getattr(happy_github, request.param).side_effect = raise_404
return happy_github | Fixture with a github instance that raises GithubException 404 when
use the user, base_url and org_name arguments to . | tests/unit_tests/repobee/plugin_tests/test_github.py | github_bad_info | slarse/repobee | 39 | python | @pytest.fixture(params=['get_user', 'get_organization'])
def github_bad_info(request, api, happy_github):
'Fixture with a github instance that raises GithubException 404 when\n use the user, base_url and org_name arguments to .\n '
getattr(happy_github, request.param).side_effect = raise_404
return happy_github | @pytest.fixture(params=['get_user', 'get_organization'])
def github_bad_info(request, api, happy_github):
'Fixture with a github instance that raises GithubException 404 when\n use the user, base_url and org_name arguments to .\n '
getattr(happy_github, request.param).side_effect = raise_404
return happy_github<|docstring|>Fixture with a github instance that raises GithubException 404 when
use the user, base_url and org_name arguments to .<|endoftext|> |
a48e029a8180961bfc1a420834677912d52be3238e8bc642c0af3e4ddbbaa274 | def test_get_all_repos(self, api, repos):
'Calling get_repos without an argument should return all repos.'
assert (len(list(api.get_repos())) == len(repos)) | Calling get_repos without an argument should return all repos. | tests/unit_tests/repobee/plugin_tests/test_github.py | test_get_all_repos | slarse/repobee | 39 | python | def test_get_all_repos(self, api, repos):
assert (len(list(api.get_repos())) == len(repos)) | def test_get_all_repos(self, api, repos):
assert (len(list(api.get_repos())) == len(repos))<|docstring|>Calling get_repos without an argument should return all repos.<|endoftext|> |
e79e2b1235e1ece9a32a5656dd220752e797954d7cf96f1fa325224f1cdbb07f | def test_with_students(self, repos, api):
'Test that supplying students causes student repo names to be\n generated as the Cartesian product of the supplied repo names and the\n students.\n '
students = list(constants.STUDENTS)
assignment_names = [repo.name for repo in repos]
expected_repo_names = plug.generate_repo_names(students, assignment_names)
expected_urls = api.get_repo_urls(expected_repo_names)
actual_urls = api.get_repo_urls(assignment_names, team_names=[t.name for t in students])
assert (len(actual_urls) == (len(students) * len(assignment_names)))
assert (sorted(expected_urls) == sorted(actual_urls)) | Test that supplying students causes student repo names to be
generated as the Cartesian product of the supplied repo names and the
students. | tests/unit_tests/repobee/plugin_tests/test_github.py | test_with_students | slarse/repobee | 39 | python | def test_with_students(self, repos, api):
'Test that supplying students causes student repo names to be\n generated as the Cartesian product of the supplied repo names and the\n students.\n '
students = list(constants.STUDENTS)
assignment_names = [repo.name for repo in repos]
expected_repo_names = plug.generate_repo_names(students, assignment_names)
expected_urls = api.get_repo_urls(expected_repo_names)
actual_urls = api.get_repo_urls(assignment_names, team_names=[t.name for t in students])
assert (len(actual_urls) == (len(students) * len(assignment_names)))
assert (sorted(expected_urls) == sorted(actual_urls)) | def test_with_students(self, repos, api):
'Test that supplying students causes student repo names to be\n generated as the Cartesian product of the supplied repo names and the\n students.\n '
students = list(constants.STUDENTS)
assignment_names = [repo.name for repo in repos]
expected_repo_names = plug.generate_repo_names(students, assignment_names)
expected_urls = api.get_repo_urls(expected_repo_names)
actual_urls = api.get_repo_urls(assignment_names, team_names=[t.name for t in students])
assert (len(actual_urls) == (len(students) * len(assignment_names)))
assert (sorted(expected_urls) == sorted(actual_urls))<|docstring|>Test that supplying students causes student repo names to be
generated as the Cartesian product of the supplied repo names and the
students.<|endoftext|> |
405f2c1638fe35f64f2b8adbc7f2604f1b457b765080f8cda4800f27bd606c0f | def test_happy_path(self, happy_github, organization, api):
'Tests that no exceptions are raised when all info is correct.'
github_plugin.GitHubAPI.verify_settings(USER, ORG_NAME, BASE_URL, TOKEN) | Tests that no exceptions are raised when all info is correct. | tests/unit_tests/repobee/plugin_tests/test_github.py | test_happy_path | slarse/repobee | 39 | python | def test_happy_path(self, happy_github, organization, api):
github_plugin.GitHubAPI.verify_settings(USER, ORG_NAME, BASE_URL, TOKEN) | def test_happy_path(self, happy_github, organization, api):
github_plugin.GitHubAPI.verify_settings(USER, ORG_NAME, BASE_URL, TOKEN)<|docstring|>Tests that no exceptions are raised when all info is correct.<|endoftext|> |
2800590c0b9e81f003c481ca5f0effb859af5e58edc157efc87d0e93c7b3aa97 | def test_none_user_raises(self, happy_github, organization, api):
'If NamedUser.login is None, there should be an exception. Happens if\n you provide a URL that points to a GitHub instance, but not to the API\n endpoint.\n '
happy_github.get_user.side_effect = (lambda _: User(login=None))
with pytest.raises(plug.UnexpectedException) as exc_info:
github_plugin.GitHubAPI.verify_settings(USER, ORG_NAME, BASE_URL, TOKEN)
assert ('Possible reasons: bad api url' in str(exc_info.value)) | If NamedUser.login is None, there should be an exception. Happens if
you provide a URL that points to a GitHub instance, but not to the API
endpoint. | tests/unit_tests/repobee/plugin_tests/test_github.py | test_none_user_raises | slarse/repobee | 39 | python | def test_none_user_raises(self, happy_github, organization, api):
'If NamedUser.login is None, there should be an exception. Happens if\n you provide a URL that points to a GitHub instance, but not to the API\n endpoint.\n '
happy_github.get_user.side_effect = (lambda _: User(login=None))
with pytest.raises(plug.UnexpectedException) as exc_info:
github_plugin.GitHubAPI.verify_settings(USER, ORG_NAME, BASE_URL, TOKEN)
assert ('Possible reasons: bad api url' in str(exc_info.value)) | def test_none_user_raises(self, happy_github, organization, api):
'If NamedUser.login is None, there should be an exception. Happens if\n you provide a URL that points to a GitHub instance, but not to the API\n endpoint.\n '
happy_github.get_user.side_effect = (lambda _: User(login=None))
with pytest.raises(plug.UnexpectedException) as exc_info:
github_plugin.GitHubAPI.verify_settings(USER, ORG_NAME, BASE_URL, TOKEN)
assert ('Possible reasons: bad api url' in str(exc_info.value))<|docstring|>If NamedUser.login is None, there should be an exception. Happens if
you provide a URL that points to a GitHub instance, but not to the API
endpoint.<|endoftext|> |
6c9cefbcda9ecdb32fd5a238db33779d4d1d7507410f6c3a4cc9d5dd3d3d775f | def test_mismatching_user_login_raises(self, happy_github, organization, api):
"I'm not sure if this can happen, but since the None-user thing\n happened, better safe than sorry.\n "
wrong_username = (USER + 'other')
happy_github.get_user.side_effect = (lambda username: User((username + 'other')))
expected_messages = ["Specified login is {}, but the fetched user's login is {}".format(USER, wrong_username), 'Possible reasons: unknown']
with pytest.raises(plug.UnexpectedException) as exc_info:
github_plugin.GitHubAPI.verify_settings(USER, ORG_NAME, BASE_URL, TOKEN)
for msg in expected_messages:
assert (msg in str(exc_info.value)) | I'm not sure if this can happen, but since the None-user thing
happened, better safe than sorry. | tests/unit_tests/repobee/plugin_tests/test_github.py | test_mismatching_user_login_raises | slarse/repobee | 39 | python | def test_mismatching_user_login_raises(self, happy_github, organization, api):
"I'm not sure if this can happen, but since the None-user thing\n happened, better safe than sorry.\n "
wrong_username = (USER + 'other')
happy_github.get_user.side_effect = (lambda username: User((username + 'other')))
expected_messages = ["Specified login is {}, but the fetched user's login is {}".format(USER, wrong_username), 'Possible reasons: unknown']
with pytest.raises(plug.UnexpectedException) as exc_info:
github_plugin.GitHubAPI.verify_settings(USER, ORG_NAME, BASE_URL, TOKEN)
for msg in expected_messages:
assert (msg in str(exc_info.value)) | def test_mismatching_user_login_raises(self, happy_github, organization, api):
"I'm not sure if this can happen, but since the None-user thing\n happened, better safe than sorry.\n "
wrong_username = (USER + 'other')
happy_github.get_user.side_effect = (lambda username: User((username + 'other')))
expected_messages = ["Specified login is {}, but the fetched user's login is {}".format(USER, wrong_username), 'Possible reasons: unknown']
with pytest.raises(plug.UnexpectedException) as exc_info:
github_plugin.GitHubAPI.verify_settings(USER, ORG_NAME, BASE_URL, TOKEN)
for msg in expected_messages:
assert (msg in str(exc_info.value))<|docstring|>I'm not sure if this can happen, but since the None-user thing
happened, better safe than sorry.<|endoftext|> |
5219741c47f4e6067be039ba8b1866ce53edfa6236f979f40b805ef57a1af610 | def test_sets_assignees_defaults_to_notset(self, happy_github, api):
'Assert that ``assignees = None`` is replaced with ``NotSet``.'
impl_mock = MagicMock(spec=github.Repository.Repository)
repo = plug.Repo(name='name', description='descr', private=True, url='bla', implementation=impl_mock)
with patch('_repobee.ext.defaults.github.GitHubAPI._wrap_issue', autospec=True):
api.create_issue('Title', 'Body', repo)
impl_mock.create_issue.assert_called_once_with('Title', body='Body', assignees=github.GithubObject.NotSet) | Assert that ``assignees = None`` is replaced with ``NotSet``. | tests/unit_tests/repobee/plugin_tests/test_github.py | test_sets_assignees_defaults_to_notset | slarse/repobee | 39 | python | def test_sets_assignees_defaults_to_notset(self, happy_github, api):
impl_mock = MagicMock(spec=github.Repository.Repository)
repo = plug.Repo(name='name', description='descr', private=True, url='bla', implementation=impl_mock)
with patch('_repobee.ext.defaults.github.GitHubAPI._wrap_issue', autospec=True):
api.create_issue('Title', 'Body', repo)
impl_mock.create_issue.assert_called_once_with('Title', body='Body', assignees=github.GithubObject.NotSet) | def test_sets_assignees_defaults_to_notset(self, happy_github, api):
impl_mock = MagicMock(spec=github.Repository.Repository)
repo = plug.Repo(name='name', description='descr', private=True, url='bla', implementation=impl_mock)
with patch('_repobee.ext.defaults.github.GitHubAPI._wrap_issue', autospec=True):
api.create_issue('Title', 'Body', repo)
impl_mock.create_issue.assert_called_once_with('Title', body='Body', assignees=github.GithubObject.NotSet)<|docstring|>Assert that ``assignees = None`` is replaced with ``NotSet``.<|endoftext|> |
a86c1a9ddba743685b691869ce9b5377c204efa0e57b04654bab2321765273e4 | def test_correctly_sets_provided_org(self, happy_github, api):
'Test that the provided organization is respected.'
new_org_name = 'some-other-org'
assert (api._org_name != new_org_name), 'test makes no sense if the new org name matches the existing one'
mock_org = create_mock_organization(happy_github, new_org_name, [])
new_api = api.for_organization(new_org_name)
assert (new_api.org is mock_org) | Test that the provided organization is respected. | tests/unit_tests/repobee/plugin_tests/test_github.py | test_correctly_sets_provided_org | slarse/repobee | 39 | python | def test_correctly_sets_provided_org(self, happy_github, api):
new_org_name = 'some-other-org'
assert (api._org_name != new_org_name), 'test makes no sense if the new org name matches the existing one'
mock_org = create_mock_organization(happy_github, new_org_name, [])
new_api = api.for_organization(new_org_name)
assert (new_api.org is mock_org) | def test_correctly_sets_provided_org(self, happy_github, api):
new_org_name = 'some-other-org'
assert (api._org_name != new_org_name), 'test makes no sense if the new org name matches the existing one'
mock_org = create_mock_organization(happy_github, new_org_name, [])
new_api = api.for_organization(new_org_name)
assert (new_api.org is mock_org)<|docstring|>Test that the provided organization is respected.<|endoftext|> |
49a0f97f6b811795748976906a7734e8608f6322a4463175f4c164089d93e01a | def register(session, plugins_presets={}):
'Register plugin. Called when used as an plugin.'
CreateFolders(session, plugins_presets).register() | Register plugin. Called when used as an plugin. | pype/ftrack/actions/action_create_folders.py | register | barklaya/pype | 0 | python | def register(session, plugins_presets={}):
CreateFolders(session, plugins_presets).register() | def register(session, plugins_presets={}):
CreateFolders(session, plugins_presets).register()<|docstring|>Register plugin. Called when used as a plugin.<|endoftext|>
a0cb77205aed7c1e1c8d5c22c3086d117ab9a9543317749c671894ae8ae24f67 | def launch(self, session, entities, event):
'Callback method for custom action.'
with_childrens = True
if (self.without_interface is False):
if ('values' not in event['data']):
return
with_childrens = event['data']['values']['children_included']
entity = entities[0]
if (entity.entity_type.lower() == 'project'):
proj = entity
else:
proj = entity['project']
project_name = proj['full_name']
project_code = proj['name']
if ((entity.entity_type.lower() == 'project') and (with_childrens is False)):
return {'success': True, 'message': 'Nothing was created'}
all_entities = []
all_entities.append(entity)
if with_childrens:
all_entities = self.get_notask_children(entity)
anatomy = Anatomy(project_name)
work_keys = ['work', 'folder']
work_template = anatomy.templates
for key in work_keys:
work_template = work_template[key]
work_has_apps = ('{app' in work_template)
publish_keys = ['publish', 'folder']
publish_template = anatomy.templates
for key in publish_keys:
publish_template = publish_template[key]
publish_has_apps = ('{app' in publish_template)
presets = config.get_presets()
app_presets = presets.get('tools', {}).get('sw_folders')
cached_apps = {}
collected_paths = []
for entity in all_entities:
if (entity.entity_type.lower() == 'project'):
continue
ent_data = {'project': {'name': project_name, 'code': project_code}}
ent_data['asset'] = entity['name']
parents = entity['link'][1:(- 1)]
hierarchy_names = [p['name'] for p in parents]
hierarchy = ''
if hierarchy_names:
hierarchy = os.path.sep.join(hierarchy_names)
ent_data['hierarchy'] = hierarchy
tasks_created = False
for child in entity['children']:
if (child['object_type']['name'].lower() != 'task'):
continue
tasks_created = True
task_type_name = child['type']['name'].lower()
task_data = ent_data.copy()
task_data['task'] = child['name']
apps = []
if (app_presets and (work_has_apps or publish_has_apps)):
possible_apps = app_presets.get(task_type_name, [])
for app in possible_apps:
if (app in cached_apps):
app_dir = cached_apps[app]
else:
try:
app_data = avalonlib.get_application(app)
app_dir = app_data['application_dir']
except ValueError:
app_dir = app
cached_apps[app] = app_dir
apps.append(app_dir)
if work_has_apps:
app_data = task_data.copy()
for app in apps:
app_data['app'] = app
collected_paths.append(self.compute_template(anatomy, app_data, work_keys))
else:
collected_paths.append(self.compute_template(anatomy, task_data, work_keys))
if publish_has_apps:
app_data = task_data.copy()
for app in apps:
app_data['app'] = app
collected_paths.append(self.compute_template(anatomy, app_data, publish_keys))
else:
collected_paths.append(self.compute_template(anatomy, task_data, publish_keys))
if (not tasks_created):
collected_paths.append(self.compute_template(anatomy, ent_data, work_keys))
collected_paths.append(self.compute_template(anatomy, ent_data, publish_keys))
if (len(collected_paths) == 0):
return {'success': True, 'message': 'No project folders to create.'}
self.log.info('Creating folders:')
for path in set(collected_paths):
self.log.info(path)
if (not os.path.exists(path)):
os.makedirs(path)
return {'success': True, 'message': 'Successfully created project folders.'} | Callback method for custom action. | pype/ftrack/actions/action_create_folders.py | launch | barklaya/pype | 0 | python | def launch(self, session, entities, event):
with_childrens = True
if (self.without_interface is False):
if ('values' not in event['data']):
return
with_childrens = event['data']['values']['children_included']
entity = entities[0]
if (entity.entity_type.lower() == 'project'):
proj = entity
else:
proj = entity['project']
project_name = proj['full_name']
project_code = proj['name']
if ((entity.entity_type.lower() == 'project') and (with_childrens is False)):
return {'success': True, 'message': 'Nothing was created'}
all_entities = []
all_entities.append(entity)
if with_childrens:
all_entities = self.get_notask_children(entity)
anatomy = Anatomy(project_name)
work_keys = ['work', 'folder']
work_template = anatomy.templates
for key in work_keys:
work_template = work_template[key]
work_has_apps = ('{app' in work_template)
publish_keys = ['publish', 'folder']
publish_template = anatomy.templates
for key in publish_keys:
publish_template = publish_template[key]
publish_has_apps = ('{app' in publish_template)
presets = config.get_presets()
app_presets = presets.get('tools', {}).get('sw_folders')
cached_apps = {}
collected_paths = []
for entity in all_entities:
if (entity.entity_type.lower() == 'project'):
continue
ent_data = {'project': {'name': project_name, 'code': project_code}}
ent_data['asset'] = entity['name']
parents = entity['link'][1:(- 1)]
hierarchy_names = [p['name'] for p in parents]
hierarchy =
if hierarchy_names:
hierarchy = os.path.sep.join(hierarchy_names)
ent_data['hierarchy'] = hierarchy
tasks_created = False
for child in entity['children']:
if (child['object_type']['name'].lower() != 'task'):
continue
tasks_created = True
task_type_name = child['type']['name'].lower()
task_data = ent_data.copy()
task_data['task'] = child['name']
apps = []
if (app_presets and (work_has_apps or publish_has_apps)):
possible_apps = app_presets.get(task_type_name, [])
for app in possible_apps:
if (app in cached_apps):
app_dir = cached_apps[app]
else:
try:
app_data = avalonlib.get_application(app)
app_dir = app_data['application_dir']
except ValueError:
app_dir = app
cached_apps[app] = app_dir
apps.append(app_dir)
if work_has_apps:
app_data = task_data.copy()
for app in apps:
app_data['app'] = app
collected_paths.append(self.compute_template(anatomy, app_data, work_keys))
else:
collected_paths.append(self.compute_template(anatomy, task_data, work_keys))
if publish_has_apps:
app_data = task_data.copy()
for app in apps:
app_data['app'] = app
collected_paths.append(self.compute_template(anatomy, app_data, publish_keys))
else:
collected_paths.append(self.compute_template(anatomy, task_data, publish_keys))
if (not tasks_created):
collected_paths.append(self.compute_template(anatomy, ent_data, work_keys))
collected_paths.append(self.compute_template(anatomy, ent_data, publish_keys))
if (len(collected_paths) == 0):
return {'success': True, 'message': 'No project folders to create.'}
self.log.info('Creating folders:')
for path in set(collected_paths):
self.log.info(path)
if (not os.path.exists(path)):
os.makedirs(path)
return {'success': True, 'message': 'Successfully created project folders.'} | def launch(self, session, entities, event):
with_childrens = True
if (self.without_interface is False):
if ('values' not in event['data']):
return
with_childrens = event['data']['values']['children_included']
entity = entities[0]
if (entity.entity_type.lower() == 'project'):
proj = entity
else:
proj = entity['project']
project_name = proj['full_name']
project_code = proj['name']
if ((entity.entity_type.lower() == 'project') and (with_childrens is False)):
return {'success': True, 'message': 'Nothing was created'}
all_entities = []
all_entities.append(entity)
if with_childrens:
all_entities = self.get_notask_children(entity)
anatomy = Anatomy(project_name)
work_keys = ['work', 'folder']
work_template = anatomy.templates
for key in work_keys:
work_template = work_template[key]
work_has_apps = ('{app' in work_template)
publish_keys = ['publish', 'folder']
publish_template = anatomy.templates
for key in publish_keys:
publish_template = publish_template[key]
publish_has_apps = ('{app' in publish_template)
presets = config.get_presets()
app_presets = presets.get('tools', {}).get('sw_folders')
cached_apps = {}
collected_paths = []
for entity in all_entities:
if (entity.entity_type.lower() == 'project'):
continue
ent_data = {'project': {'name': project_name, 'code': project_code}}
ent_data['asset'] = entity['name']
parents = entity['link'][1:(- 1)]
hierarchy_names = [p['name'] for p in parents]
hierarchy =
if hierarchy_names:
hierarchy = os.path.sep.join(hierarchy_names)
ent_data['hierarchy'] = hierarchy
tasks_created = False
for child in entity['children']:
if (child['object_type']['name'].lower() != 'task'):
continue
tasks_created = True
task_type_name = child['type']['name'].lower()
task_data = ent_data.copy()
task_data['task'] = child['name']
apps = []
if (app_presets and (work_has_apps or publish_has_apps)):
possible_apps = app_presets.get(task_type_name, [])
for app in possible_apps:
if (app in cached_apps):
app_dir = cached_apps[app]
else:
try:
app_data = avalonlib.get_application(app)
app_dir = app_data['application_dir']
except ValueError:
app_dir = app
cached_apps[app] = app_dir
apps.append(app_dir)
if work_has_apps:
app_data = task_data.copy()
for app in apps:
app_data['app'] = app
collected_paths.append(self.compute_template(anatomy, app_data, work_keys))
else:
collected_paths.append(self.compute_template(anatomy, task_data, work_keys))
if publish_has_apps:
app_data = task_data.copy()
for app in apps:
app_data['app'] = app
collected_paths.append(self.compute_template(anatomy, app_data, publish_keys))
else:
collected_paths.append(self.compute_template(anatomy, task_data, publish_keys))
if (not tasks_created):
collected_paths.append(self.compute_template(anatomy, ent_data, work_keys))
collected_paths.append(self.compute_template(anatomy, ent_data, publish_keys))
if (len(collected_paths) == 0):
return {'success': True, 'message': 'No project folders to create.'}
self.log.info('Creating folders:')
for path in set(collected_paths):
self.log.info(path)
if (not os.path.exists(path)):
os.makedirs(path)
return {'success': True, 'message': 'Successfully created project folders.'}<|docstring|>Callback method for custom action.<|endoftext|> |
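The launch record above fills Anatomy work/publish folder templates from entity data before creating directories. A minimal stand-alone sketch of that formatting step; the template string and keys here are assumptions, not the actual pype presets:
work_template = '{root}/{project[name]}/{hierarchy}/{asset}/work/{task}/{app}'
ent_data = {
    'root': '/projects',
    'project': {'name': 'MyProject', 'code': 'mp'},
    'hierarchy': 'assets/characters',
    'asset': 'hero',
    'task': 'modeling',
    'app': 'maya',
}
path = work_template.format(**ent_data)
print(path)  # /projects/MyProject/assets/characters/hero/work/modeling/maya
# launch() collects one such path per task (and per app when the template contains an
# {app} key) and then creates any missing directories with os.makedirs.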
9dfe0b10769fe5d45ba976e75397e1b8a4e296397461c19c590609fb20d5b171 | def some_params_not_changed(self) -> bool:
"\n Compares if each of the current named parameter has changed from the values\n stored in self.cache.\n - Assumes self.cache is a dictionary that stores this model's named parameters\n - Useful to check if parameters are being modified across different iterations\n\n "
with torch.no_grad():
not_changed = {}
for (name, param) in self.named_parameters():
if torch.equal(self.cache[name], param):
d = (self.cache[name] - param)
not_changed[name] = torch.linalg.norm(d)
print(tnorm(self.cache[name]), tnorm(param))
if (len(not_changed) < 1):
return False
else:
print(f'''Not changed:
''', not_changed)
return True | Compares if each of the current named parameters has changed from the values
stored in self.cache.
- Assumes self.cache is a dictionary that stores this model's named parameters
- Useful to check if parameters are being modified across different iterations | reprlearn/models/sym_bilinear.py | some_params_not_changed | cocoaaa/ReprLearn | 0 | python | def some_params_not_changed(self) -> bool:
"\n Compares if each of the current named parameter has changed from the values\n stored in self.cache.\n - Assumes self.cache is a dictionary that stores this model's named parameters\n - Useful to check if parameters are being modified across different iterations\n\n "
with torch.no_grad():
not_changed = {}
for (name, param) in self.named_parameters():
if torch.equal(self.cache[name], param):
d = (self.cache[name] - param)
not_changed[name] = torch.linalg.norm(d)
print(tnorm(self.cache[name]), tnorm(param))
if (len(not_changed) < 1):
return False
else:
print(f'Not changed:
', not_changed)
return True | def some_params_not_changed(self) -> bool:
"\n Compares if each of the current named parameter has changed from the values\n stored in self.cache.\n - Assumes self.cache is a dictionary that stores this model's named parameters\n - Useful to check if parameters are being modified across different iterations\n\n "
with torch.no_grad():
not_changed = {}
for (name, param) in self.named_parameters():
if torch.equal(self.cache[name], param):
d = (self.cache[name] - param)
not_changed[name] = torch.linalg.norm(d)
print(tnorm(self.cache[name]), tnorm(param))
if (len(not_changed) < 1):
return False
else:
print(f'Not changed:
', not_changed)
return True<|docstring|>Compares if each of the current named parameters has changed from the values
stored in self.cache.
- Assumes self.cache is a dictionary that stores this model's named parameters
- Useful to check if parameters are being modified across different iterations<|endoftext|> |
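A short sketch of the caching workflow this method assumes (self.cache holding cloned named parameters), written against plain PyTorch; the model, optimizer and data are illustrative:
import torch

model = torch.nn.Linear(4, 2)
cache = {name: p.detach().clone() for name, p in model.named_parameters()}

opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()
opt.step()

with torch.no_grad():
    unchanged = [n for n, p in model.named_parameters() if torch.equal(cache[n], p)]
print(unchanged)  # expected to be empty after a gradient step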
e825de55225d553fd13b1495576e89630a307e6f166e3ae7420db6f591447a4f | def forward(self, *, s: int=None, c: int=None):
'\n s: style label; must be in {0,...,n_styles-1}\n c: content label; must be in {0,..., n_contents-1}\n '
A = self.styles
B = self.contents
if (s is not None):
A = self.styles[[s]]
if (c is not None):
B = self.contents[[c]]
out = A.matmul(self.W)
out = out.matmul(B)
out = self.non_linear_fn(out.permute(1, 0, 2, (- 2), (- 1)).squeeze())
return out | s: style label; must be in {0,...,n_styles-1}
c: content label; must be in {0,..., n_contents-1} | reprlearn/models/sym_bilinear.py | forward | cocoaaa/ReprLearn | 0 | python | def forward(self, *, s: int=None, c: int=None):
'\n s: style label; must be in {0,...,n_styles-1}\n c: content label; must be in {0,..., n_contents-1}\n '
A = self.styles
B = self.contents
if (s is not None):
A = self.styles[[s]]
if (c is not None):
B = self.contents[[c]]
out = A.matmul(self.W)
out = out.matmul(B)
out = self.non_linear_fn(out.permute(1, 0, 2, (- 2), (- 1)).squeeze())
return out | def forward(self, *, s: int=None, c: int=None):
'\n s: style label; must be in {0,...,n_styles-1}\n c: content label; must be in {0,..., n_contents-1}\n '
A = self.styles
B = self.contents
if (s is not None):
A = self.styles[[s]]
if (c is not None):
B = self.contents[[c]]
out = A.matmul(self.W)
out = out.matmul(B)
out = self.non_linear_fn(out.permute(1, 0, 2, (- 2), (- 1)).squeeze())
return out<|docstring|>s: style label; must be in {0,...,n_styles-1}
c: content label; must be in {0,..., n_contents-1}<|endoftext|> |
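An independent illustration of the style/content bilinear composition idea used in the forward pass above, with assumed shapes chosen for clarity (they are not the module's actual tensor shapes):
import torch

n_styles, n_contents, k, h, w = 3, 5, 8, 4, 4
styles = torch.randn(n_styles, k)      # one latent vector per style label
contents = torch.randn(n_contents, k)  # one latent vector per content label
W = torch.randn(k, k, h, w)            # mixing tensor producing an h-by-w map
# out[s, c] = sigmoid( sum_{i,j} styles[s, i] * W[i, j, :, :] * contents[c, j] )
out = torch.sigmoid(torch.einsum('si,ijhw,cj->schw', styles, W, contents))
print(out.shape)  # torch.Size([3, 5, 4, 4])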
9bf10b0a288662515784023ee80ae027a71061a59710ebf962b78b718d906825 | def build(region_similarity_calculator_config):
'Builds region similarity calculator based on the configuration.\n\n Builds one of [IouSimilarity, IoaSimilarity, NegSqDistSimilarity] objects. See\n core/region_similarity_calculator.proto for details.\n\n Args:\n region_similarity_calculator_config: RegionSimilarityCalculator\n configuration proto.\n\n Returns:\n region_similarity_calculator: RegionSimilarityCalculator object.\n\n Raises:\n ValueError: On unknown region similarity calculator.\n '
if (not isinstance(region_similarity_calculator_config, region_similarity_calculator_pb2.RegionSimilarityCalculator)):
raise ValueError('region_similarity_calculator_config not of type region_similarity_calculator_pb2.RegionsSimilarityCalculator')
similarity_calculator = region_similarity_calculator_config.WhichOneof('region_similarity')
if (similarity_calculator == 'iou_similarity'):
return region_similarity_calculator.IouSimilarity()
if (similarity_calculator == 'ioa_similarity'):
return region_similarity_calculator.IoaSimilarity()
if (similarity_calculator == 'neg_sq_dist_similarity'):
return region_similarity_calculator.NegSqDistSimilarity()
if (similarity_calculator == 'thresholded_iou_similarity'):
return region_similarity_calculator.ThresholdedIouSimilarity(region_similarity_calculator_config.thresholded_iou_similarity.iou_threshold)
raise ValueError('Unknown region similarity calculator.') | Builds region similarity calculator based on the configuration.
Builds one of [IouSimilarity, IoaSimilarity, NegSqDistSimilarity] objects. See
core/region_similarity_calculator.proto for details.
Args:
region_similarity_calculator_config: RegionSimilarityCalculator
configuration proto.
Returns:
region_similarity_calculator: RegionSimilarityCalculator object.
Raises:
ValueError: On unknown region similarity calculator. | research/object_detection/builders/region_similarity_calculator_builder.py | build | volkerstampa/models | 82,518 | python | def build(region_similarity_calculator_config):
'Builds region similarity calculator based on the configuration.\n\n Builds one of [IouSimilarity, IoaSimilarity, NegSqDistSimilarity] objects. See\n core/region_similarity_calculator.proto for details.\n\n Args:\n region_similarity_calculator_config: RegionSimilarityCalculator\n configuration proto.\n\n Returns:\n region_similarity_calculator: RegionSimilarityCalculator object.\n\n Raises:\n ValueError: On unknown region similarity calculator.\n '
if (not isinstance(region_similarity_calculator_config, region_similarity_calculator_pb2.RegionSimilarityCalculator)):
raise ValueError('region_similarity_calculator_config not of type region_similarity_calculator_pb2.RegionsSimilarityCalculator')
similarity_calculator = region_similarity_calculator_config.WhichOneof('region_similarity')
if (similarity_calculator == 'iou_similarity'):
return region_similarity_calculator.IouSimilarity()
if (similarity_calculator == 'ioa_similarity'):
return region_similarity_calculator.IoaSimilarity()
if (similarity_calculator == 'neg_sq_dist_similarity'):
return region_similarity_calculator.NegSqDistSimilarity()
if (similarity_calculator == 'thresholded_iou_similarity'):
return region_similarity_calculator.ThresholdedIouSimilarity(region_similarity_calculator_config.thresholded_iou_similarity.iou_threshold)
raise ValueError('Unknown region similarity calculator.') | def build(region_similarity_calculator_config):
'Builds region similarity calculator based on the configuration.\n\n Builds one of [IouSimilarity, IoaSimilarity, NegSqDistSimilarity] objects. See\n core/region_similarity_calculator.proto for details.\n\n Args:\n region_similarity_calculator_config: RegionSimilarityCalculator\n configuration proto.\n\n Returns:\n region_similarity_calculator: RegionSimilarityCalculator object.\n\n Raises:\n ValueError: On unknown region similarity calculator.\n '
if (not isinstance(region_similarity_calculator_config, region_similarity_calculator_pb2.RegionSimilarityCalculator)):
raise ValueError('region_similarity_calculator_config not of type region_similarity_calculator_pb2.RegionsSimilarityCalculator')
similarity_calculator = region_similarity_calculator_config.WhichOneof('region_similarity')
if (similarity_calculator == 'iou_similarity'):
return region_similarity_calculator.IouSimilarity()
if (similarity_calculator == 'ioa_similarity'):
return region_similarity_calculator.IoaSimilarity()
if (similarity_calculator == 'neg_sq_dist_similarity'):
return region_similarity_calculator.NegSqDistSimilarity()
if (similarity_calculator == 'thresholded_iou_similarity'):
return region_similarity_calculator.ThresholdedIouSimilarity(region_similarity_calculator_config.thresholded_iou_similarity.iou_threshold)
raise ValueError('Unknown region similarity calculator.')<|docstring|>Builds region similarity calculator based on the configuration.
Builds one of [IouSimilarity, IoaSimilarity, NegSqDistSimilarity] objects. See
core/region_similarity_calculator.proto for details.
Args:
region_similarity_calculator_config: RegionSimilarityCalculator
configuration proto.
Returns:
region_similarity_calculator: RegionSimilarityCalculator object.
Raises:
ValueError: On unknown region similarity calculator.<|endoftext|> |
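A hedged usage sketch for the builder above; it assumes the TensorFlow Object Detection API and its compiled protos are installed, and the text-format config is illustrative:
from google.protobuf import text_format
from object_detection.builders import region_similarity_calculator_builder as builder
from object_detection.protos import region_similarity_calculator_pb2

config = region_similarity_calculator_pb2.RegionSimilarityCalculator()
text_format.Merge('iou_similarity { }', config)
calculator = builder.build(config)
print(type(calculator).__name__)  # IouSimilarity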
3258286b360e5e5efb06cb779d6001950280ae0105369d9bb34c715e02f440d7 | def compute_multiscalar_profile(gdf, segregation_index=None, groups=None, group_pop_var=None, total_pop_var=None, distances=None, network=None, decay='linear', function='triangular', precompute=True):
"Compute multiscalar segregation profile.\n\n This function calculates several Spatial Information Theory indices with\n increasing distance parameters.\n\n Parameters\n ----------\n gdf : geopandas.GeoDataFrame\n geodataframe with rows as observations and columns as population\n variables. Note that if using a network distance, the coordinate\n system for this gdf should be 4326. If using euclidian distance,\n this must be projected into planar coordinates like state plane or UTM.\n segregation_index : SpatialImplicit SegregationIndex Class\n a class from the library such as MultiInformationTheory, or MinMax\n groups : list\n list of population groups for calculating multigroup indices\n group_pop_var : str\n name of population group on gdf for calculating single group indices\n total_pop_var : str\n bame of total population on gdf for calculating single group indices\n distances : list\n list of floats representing bandwidth distances that define a local\n environment.\n network : pandana.Network (optional)\n A pandana.Network likely created with\n `segregation.network.get_osm_network`.\n decay : str (optional)\n decay type to be used in pandana accessibility calculation \n options are {'linear', 'exp', 'flat'}. The default is 'linear'.\n function: 'str' (optional)\n which weighting function should be passed to libpysal.weights.Kernel\n must be one of: 'triangular','uniform','quadratic','quartic','gaussian'\n precompute: bool\n Whether the pandana.Network instance should precompute the range\n queries. This is True by default\n index_type : str options: {single_group, multi_group}\n Whether the index is a single-group or -multigroup index\n\n\n Returns\n -------\n pandas.Series\n Series with distances as index and index statistics as values\n\n Notes\n -----\n Based on Sean F. Reardon, Stephen A. Matthews, David O’Sullivan, Barrett A. Lee, Glenn Firebaugh, Chad R. Farrell, & Kendra Bischoff. (2008). The Geographic Scale of Metropolitan Racial Segregation. Demography, 45(3), 489–514. https://doi.org/10.1353/dem.0.0019.\n\n Reference: :cite:`Reardon2008`.\n\n "
if (not segregation_index):
raise ValueError('You must pass a segregation SpatialImplicit Index Class')
gdf = gdf.copy()
indices = {}
if groups:
gdf[groups] = gdf[groups].astype(float)
indices[0] = segregation_index(gdf, groups=groups).statistic
elif group_pop_var:
indices[0] = segregation_index(gdf, group_pop_var=group_pop_var, total_pop_var=total_pop_var).statistic
with warnings.catch_warnings():
warnings.simplefilter('ignore')
if network:
if (not (gdf.crs.name == 'WGS 84')):
gdf = gdf.to_crs(epsg=4326)
if precompute:
maxdist = max(distances)
network.precompute(maxdist)
for distance in distances:
distance = np.float(distance)
if group_pop_var:
idx = segregation_index(gdf, group_pop_var=group_pop_var, total_pop_var=total_pop_var, network=network, decay=decay, distance=distance, precompute=False)
elif groups:
idx = segregation_index(gdf, groups=groups, network=network, decay=decay, distance=distance, precompute=False)
indices[distance] = idx.statistic
else:
for distance in distances:
w = Kernel.from_dataframe(gdf, bandwidth=distance, function=function)
if group_pop_var:
idx = segregation_index(gdf, group_pop_var=group_pop_var, total_pop_var=total_pop_var, w=w)
else:
idx = segregation_index(gdf, groups, w=w)
indices[distance] = idx.statistic
series = pd.Series(indices, name=str(type(idx)).split('.')[(- 1)][:(- 2)])
series.index.name = 'distance'
return series | Compute multiscalar segregation profile.
This function calculates several Spatial Information Theory indices with
increasing distance parameters.
Parameters
----------
gdf : geopandas.GeoDataFrame
geodataframe with rows as observations and columns as population
variables. Note that if using a network distance, the coordinate
system for this gdf should be 4326. If using euclidian distance,
this must be projected into planar coordinates like state plane or UTM.
segregation_index : SpatialImplicit SegregationIndex Class
a class from the library such as MultiInformationTheory, or MinMax
groups : list
list of population groups for calculating multigroup indices
group_pop_var : str
name of population group on gdf for calculating single group indices
total_pop_var : str
name of total population on gdf for calculating single group indices
distances : list
list of floats representing bandwidth distances that define a local
environment.
network : pandana.Network (optional)
A pandana.Network likely created with
`segregation.network.get_osm_network`.
decay : str (optional)
decay type to be used in pandana accessibility calculation
options are {'linear', 'exp', 'flat'}. The default is 'linear'.
function: 'str' (optional)
which weighting function should be passed to libpysal.weights.Kernel
must be one of: 'triangular','uniform','quadratic','quartic','gaussian'
precompute: bool
Whether the pandana.Network instance should precompute the range
queries. This is True by default
index_type : str options: {single_group, multi_group}
Whether the index is a single-group or multi-group index
Returns
-------
pandas.Series
Series with distances as index and index statistics as values
Notes
-----
Based on Sean F. Reardon, Stephen A. Matthews, David O’Sullivan, Barrett A. Lee, Glenn Firebaugh, Chad R. Farrell, & Kendra Bischoff. (2008). The Geographic Scale of Metropolitan Racial Segregation. Demography, 45(3), 489–514. https://doi.org/10.1353/dem.0.0019.
Reference: :cite:`Reardon2008`. | segregation/dynamics/segregation_profile.py | compute_multiscalar_profile | noahbouchier/segregation | 92 | python | def compute_multiscalar_profile(gdf, segregation_index=None, groups=None, group_pop_var=None, total_pop_var=None, distances=None, network=None, decay='linear', function='triangular', precompute=True):
"Compute multiscalar segregation profile.\n\n This function calculates several Spatial Information Theory indices with\n increasing distance parameters.\n\n Parameters\n ----------\n gdf : geopandas.GeoDataFrame\n geodataframe with rows as observations and columns as population\n variables. Note that if using a network distance, the coordinate\n system for this gdf should be 4326. If using euclidian distance,\n this must be projected into planar coordinates like state plane or UTM.\n segregation_index : SpatialImplicit SegregationIndex Class\n a class from the library such as MultiInformationTheory, or MinMax\n groups : list\n list of population groups for calculating multigroup indices\n group_pop_var : str\n name of population group on gdf for calculating single group indices\n total_pop_var : str\n bame of total population on gdf for calculating single group indices\n distances : list\n list of floats representing bandwidth distances that define a local\n environment.\n network : pandana.Network (optional)\n A pandana.Network likely created with\n `segregation.network.get_osm_network`.\n decay : str (optional)\n decay type to be used in pandana accessibility calculation \n options are {'linear', 'exp', 'flat'}. The default is 'linear'.\n function: 'str' (optional)\n which weighting function should be passed to libpysal.weights.Kernel\n must be one of: 'triangular','uniform','quadratic','quartic','gaussian'\n precompute: bool\n Whether the pandana.Network instance should precompute the range\n queries. This is True by default\n index_type : str options: {single_group, multi_group}\n Whether the index is a single-group or -multigroup index\n\n\n Returns\n -------\n pandas.Series\n Series with distances as index and index statistics as values\n\n Notes\n -----\n Based on Sean F. Reardon, Stephen A. Matthews, David O’Sullivan, Barrett A. Lee, Glenn Firebaugh, Chad R. Farrell, & Kendra Bischoff. (2008). The Geographic Scale of Metropolitan Racial Segregation. Demography, 45(3), 489–514. https://doi.org/10.1353/dem.0.0019.\n\n Reference: :cite:`Reardon2008`.\n\n "
if (not segregation_index):
raise ValueError('You must pass a segregation SpatialImplicit Index Class')
gdf = gdf.copy()
indices = {}
if groups:
gdf[groups] = gdf[groups].astype(float)
indices[0] = segregation_index(gdf, groups=groups).statistic
elif group_pop_var:
indices[0] = segregation_index(gdf, group_pop_var=group_pop_var, total_pop_var=total_pop_var).statistic
with warnings.catch_warnings():
warnings.simplefilter('ignore')
if network:
if (not (gdf.crs.name == 'WGS 84')):
gdf = gdf.to_crs(epsg=4326)
if precompute:
maxdist = max(distances)
network.precompute(maxdist)
for distance in distances:
distance = np.float(distance)
if group_pop_var:
idx = segregation_index(gdf, group_pop_var=group_pop_var, total_pop_var=total_pop_var, network=network, decay=decay, distance=distance, precompute=False)
elif groups:
idx = segregation_index(gdf, groups=groups, network=network, decay=decay, distance=distance, precompute=False)
indices[distance] = idx.statistic
else:
for distance in distances:
w = Kernel.from_dataframe(gdf, bandwidth=distance, function=function)
if group_pop_var:
idx = segregation_index(gdf, group_pop_var=group_pop_var, total_pop_var=total_pop_var, w=w)
else:
idx = segregation_index(gdf, groups, w=w)
indices[distance] = idx.statistic
series = pd.Series(indices, name=str(type(idx)).split('.')[(- 1)][:(- 2)])
series.index.name = 'distance'
return series | def compute_multiscalar_profile(gdf, segregation_index=None, groups=None, group_pop_var=None, total_pop_var=None, distances=None, network=None, decay='linear', function='triangular', precompute=True):
"Compute multiscalar segregation profile.\n\n This function calculates several Spatial Information Theory indices with\n increasing distance parameters.\n\n Parameters\n ----------\n gdf : geopandas.GeoDataFrame\n geodataframe with rows as observations and columns as population\n variables. Note that if using a network distance, the coordinate\n system for this gdf should be 4326. If using euclidian distance,\n this must be projected into planar coordinates like state plane or UTM.\n segregation_index : SpatialImplicit SegregationIndex Class\n a class from the library such as MultiInformationTheory, or MinMax\n groups : list\n list of population groups for calculating multigroup indices\n group_pop_var : str\n name of population group on gdf for calculating single group indices\n total_pop_var : str\n bame of total population on gdf for calculating single group indices\n distances : list\n list of floats representing bandwidth distances that define a local\n environment.\n network : pandana.Network (optional)\n A pandana.Network likely created with\n `segregation.network.get_osm_network`.\n decay : str (optional)\n decay type to be used in pandana accessibility calculation \n options are {'linear', 'exp', 'flat'}. The default is 'linear'.\n function: 'str' (optional)\n which weighting function should be passed to libpysal.weights.Kernel\n must be one of: 'triangular','uniform','quadratic','quartic','gaussian'\n precompute: bool\n Whether the pandana.Network instance should precompute the range\n queries. This is True by default\n index_type : str options: {single_group, multi_group}\n Whether the index is a single-group or -multigroup index\n\n\n Returns\n -------\n pandas.Series\n Series with distances as index and index statistics as values\n\n Notes\n -----\n Based on Sean F. Reardon, Stephen A. Matthews, David O’Sullivan, Barrett A. Lee, Glenn Firebaugh, Chad R. Farrell, & Kendra Bischoff. (2008). The Geographic Scale of Metropolitan Racial Segregation. Demography, 45(3), 489–514. https://doi.org/10.1353/dem.0.0019.\n\n Reference: :cite:`Reardon2008`.\n\n "
if (not segregation_index):
raise ValueError('You must pass a segregation SpatialImplicit Index Class')
gdf = gdf.copy()
indices = {}
if groups:
gdf[groups] = gdf[groups].astype(float)
indices[0] = segregation_index(gdf, groups=groups).statistic
elif group_pop_var:
indices[0] = segregation_index(gdf, group_pop_var=group_pop_var, total_pop_var=total_pop_var).statistic
with warnings.catch_warnings():
warnings.simplefilter('ignore')
if network:
if (not (gdf.crs.name == 'WGS 84')):
gdf = gdf.to_crs(epsg=4326)
if precompute:
maxdist = max(distances)
network.precompute(maxdist)
for distance in distances:
distance = np.float(distance)
if group_pop_var:
idx = segregation_index(gdf, group_pop_var=group_pop_var, total_pop_var=total_pop_var, network=network, decay=decay, distance=distance, precompute=False)
elif groups:
idx = segregation_index(gdf, groups=groups, network=network, decay=decay, distance=distance, precompute=False)
indices[distance] = idx.statistic
else:
for distance in distances:
w = Kernel.from_dataframe(gdf, bandwidth=distance, function=function)
if group_pop_var:
idx = segregation_index(gdf, group_pop_var=group_pop_var, total_pop_var=total_pop_var, w=w)
else:
idx = segregation_index(gdf, groups, w=w)
indices[distance] = idx.statistic
series = pd.Series(indices, name=str(type(idx)).split('.')[(- 1)][:(- 2)])
series.index.name = 'distance'
return series<|docstring|>Compute multiscalar segregation profile.
This function calculates several Spatial Information Theory indices with
increasing distance parameters.
Parameters
----------
gdf : geopandas.GeoDataFrame
geodataframe with rows as observations and columns as population
variables. Note that if using a network distance, the coordinate
system for this gdf should be 4326. If using euclidian distance,
this must be projected into planar coordinates like state plane or UTM.
segregation_index : SpatialImplicit SegregationIndex Class
a class from the library such as MultiInformationTheory, or MinMax
groups : list
list of population groups for calculating multigroup indices
group_pop_var : str
name of population group on gdf for calculating single group indices
total_pop_var : str
name of total population on gdf for calculating single group indices
distances : list
list of floats representing bandwidth distances that define a local
environment.
network : pandana.Network (optional)
A pandana.Network likely created with
`segregation.network.get_osm_network`.
decay : str (optional)
decay type to be used in pandana accessibility calculation
options are {'linear', 'exp', 'flat'}. The default is 'linear'.
function: 'str' (optional)
which weighting function should be passed to libpysal.weights.Kernel
must be one of: 'triangular','uniform','quadratic','quartic','gaussian'
precompute: bool
Whether the pandana.Network instance should precompute the range
queries. This is True by default
index_type : str options: {single_group, multi_group}
Whether the index is a single-group or multi-group index
Returns
-------
pandas.Series
Series with distances as index and index statistics as values
Notes
-----
Based on Sean F. Reardon, Stephen A. Matthews, David O’Sullivan, Barrett A. Lee, Glenn Firebaugh, Chad R. Farrell, & Kendra Bischoff. (2008). The Geographic Scale of Metropolitan Racial Segregation. Demography, 45(3), 489–514. https://doi.org/10.1353/dem.0.0019.
Reference: :cite:`Reardon2008`.<|endoftext|> |
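A hedged usage sketch for compute_multiscalar_profile; the shapefile path, column names, CRS and import locations are assumptions (import paths vary across segregation releases):
import geopandas as gpd
from segregation.multigroup import MultiInformationTheory   # assumed import location
from segregation.dynamics import compute_multiscalar_profile

tracts = gpd.read_file('tracts.shp').to_crs(epsg=32616)     # planar CRS for euclidean bandwidths
profile = compute_multiscalar_profile(
    tracts,
    segregation_index=MultiInformationTheory,
    groups=['white_pop', 'black_pop', 'hisp_pop'],
    distances=[500, 1000, 2000, 4000],
)
print(profile)  # Series indexed by distance (0 plus each bandwidth)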
822b1b1c22506136c3b90a109e7819053db5b6f52722c3e4c73b24d4d6640a1f | def mlir_opt(source: str, options: List[str], mlir_opt='mlir-opt'):
'\n Calls ``mlir-opt`` on *source* with *options* as additional arguments.\n\n :arg source: The code to be passed to mlir-opt.\n :arg options: An instance of :class:`list`.\n :return: Transformed *source* as emitted by ``mlir-opt``.\n '
assert ('-o' not in options)
with tempfile.NamedTemporaryFile(mode='w', suffix='.mlir') as fp:
fp.write(source)
fp.file.flush()
cmdline = ([mlir_opt, fp.name] + options)
(result, stdout, stderr) = call_capture_output(cmdline)
return stdout.decode() | Calls ``mlir-opt`` on *source* with *options* as additional arguments.
:arg source: The code to be passed to mlir-opt.
:arg options: An instance of :class:`list`.
:return: Transformed *source* as emitted by ``mlir-opt``. | mlir/run.py | mlir_opt | kaushikcfd/pymlir | 0 | python | def mlir_opt(source: str, options: List[str], mlir_opt='mlir-opt'):
'\n Calls ``mlir-opt`` on *source* with *options* as additional arguments.\n\n :arg source: The code to be passed to mlir-opt.\n :arg options: An instance of :class:`list`.\n :return: Transformed *source* as emitted by ``mlir-opt``.\n '
assert ('-o' not in options)
with tempfile.NamedTemporaryFile(mode='w', suffix='.mlir') as fp:
fp.write(source)
fp.file.flush()
cmdline = ([mlir_opt, fp.name] + options)
(result, stdout, stderr) = call_capture_output(cmdline)
return stdout.decode() | def mlir_opt(source: str, options: List[str], mlir_opt='mlir-opt'):
'\n Calls ``mlir-opt`` on *source* with *options* as additional arguments.\n\n :arg source: The code to be passed to mlir-opt.\n :arg options: An instance of :class:`list`.\n :return: Transformed *source* as emitted by ``mlir-opt``.\n '
assert ('-o' not in options)
with tempfile.NamedTemporaryFile(mode='w', suffix='.mlir') as fp:
fp.write(source)
fp.file.flush()
cmdline = ([mlir_opt, fp.name] + options)
(result, stdout, stderr) = call_capture_output(cmdline)
return stdout.decode()<|docstring|>Calls ``mlir-opt`` on *source* with *options* as additional arguments.
:arg source: The code to be passed to mlir-opt.
:arg options: An instance of :class:`list`.
:return: Transformed *source* as emitted by ``mlir-opt``.<|endoftext|> |
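A usage sketch for mlir_opt, assuming this module is importable as mlir.run and an mlir-opt binary from an LLVM/MLIR build is on PATH; the MLIR snippet and pass name follow older standard-dialect syntax and are illustrative only:
from mlir.run import mlir_opt  # assumed import path

source = '''
func @add(%a: f64, %b: f64) -> f64 {
  %sum = addf %a, %b : f64
  return %sum : f64
}
'''
lowered = mlir_opt(source, ['-convert-std-to-llvm'])
print(lowered)  # LLVM-dialect MLIR emitted by mlir-opt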
5c7a1eac248c2ed43362d53f70a1a44122d592239bbab4e7e1d7907a1e398a69 | def mlir_translate(source, options, mlir_translate='mlir-translate'):
'\n Calls ``mlir-translate`` on *source* with *options* as additional arguments.\n\n :arg source: The code to be passed to mlir-translate.\n :arg options: An instance of :class:`list`.\n :return: Transformed *source* as emitted by ``mlir-translate``.\n '
with tempfile.NamedTemporaryFile(mode='w', suffix='.mlir', delete=False) as fp:
fp.write(source)
fp.file.flush()
cmdline = ([mlir_translate, fp.name] + options)
(result, stdout, stderr) = call_capture_output(cmdline)
return stdout.decode() | Calls ``mlir-translate`` on *source* with *options* as additional arguments.
:arg source: The code to be passed to mlir-translate.
:arg options: An instance of :class:`list`.
:return: Transformed *source* as emitted by ``mlir-translate``. | mlir/run.py | mlir_translate | kaushikcfd/pymlir | 0 | python | def mlir_translate(source, options, mlir_translate='mlir-translate'):
'\n Calls ``mlir-translate`` on *source* with *options* as additional arguments.\n\n :arg source: The code to be passed to mlir-translate.\n :arg options: An instance of :class:`list`.\n :return: Transformed *source* as emitted by ``mlir-translate``.\n '
with tempfile.NamedTemporaryFile(mode='w', suffix='.mlir', delete=False) as fp:
fp.write(source)
fp.file.flush()
cmdline = ([mlir_translate, fp.name] + options)
(result, stdout, stderr) = call_capture_output(cmdline)
return stdout.decode() | def mlir_translate(source, options, mlir_translate='mlir-translate'):
'\n Calls ``mlir-translate`` on *source* with *options* as additional arguments.\n\n :arg source: The code to be passed to mlir-translate.\n :arg options: An instance of :class:`list`.\n :return: Transformed *source* as emitted by ``mlir-translate``.\n '
with tempfile.NamedTemporaryFile(mode='w', suffix='.mlir', delete=False) as fp:
fp.write(source)
fp.file.flush()
cmdline = ([mlir_translate, fp.name] + options)
(result, stdout, stderr) = call_capture_output(cmdline)
return stdout.decode()<|docstring|>Calls ``mlir-translate`` on *source* with *options* as additional arguments.
:arg source: The code to be passed to mlir-translate.
:arg options: An instance of :class:`list`.
:return: Transformed *source* as emitted by ``mlir-translate``.<|endoftext|> |
51a24ccb6f8bf90130c667ba69b6dd51373cb3c0399f2068012bd75795cd313d | def mlir_to_llvmir(source, debug=False):
'\n Converts MLIR *source* to LLVM IR. Invokes ``mlir-tranlate -mlir-to-llvmir``\n under the hood.\n '
if debug:
return mlir_translate(source, ['-mlir-to-llvmir', '-debugify-level=location+variables'])
else:
return mlir_translate(source, ['-mlir-to-llvmir']) | Converts MLIR *source* to LLVM IR. Invokes ``mlir-translate -mlir-to-llvmir``
under the hood. | mlir/run.py | mlir_to_llvmir | kaushikcfd/pymlir | 0 | python | def mlir_to_llvmir(source, debug=False):
'\n Converts MLIR *source* to LLVM IR. Invokes ``mlir-tranlate -mlir-to-llvmir``\n under the hood.\n '
if debug:
return mlir_translate(source, ['-mlir-to-llvmir', '-debugify-level=location+variables'])
else:
return mlir_translate(source, ['-mlir-to-llvmir']) | def mlir_to_llvmir(source, debug=False):
'\n Converts MLIR *source* to LLVM IR. Invokes ``mlir-tranlate -mlir-to-llvmir``\n under the hood.\n '
if debug:
return mlir_translate(source, ['-mlir-to-llvmir', '-debugify-level=location+variables'])
else:
return mlir_translate(source, ['-mlir-to-llvmir'])<|docstring|>Converts MLIR *source* to LLVM IR. Invokes ``mlir-translate -mlir-to-llvmir``
under the hood.<|endoftext|> |
1ba93b625c0b0618b20e786d5e20cee133f9f13ed6737bc1e63977fef9f06728 | def llvmir_to_obj(source, llc='llc'):
'\n Returns the compiled object code for the LLVM code *source*.\n '
with tempfile.NamedTemporaryFile(mode='w', suffix='.ll') as llfp:
llfp.write(source)
llfp.file.flush()
with tempfile.NamedTemporaryFile(suffix='.o', mode='rb') as objfp:
cmdline = [llc, llfp.name, '-o', objfp.name, '-filetype=obj']
(result, stdout, stderr) = call_capture_output(cmdline)
obj_code = objfp.read()
return obj_code | Returns the compiled object code for the LLVM code *source*. | mlir/run.py | llvmir_to_obj | kaushikcfd/pymlir | 0 | python | def llvmir_to_obj(source, llc='llc'):
'\n \n '
with tempfile.NamedTemporaryFile(mode='w', suffix='.ll') as llfp:
llfp.write(source)
llfp.file.flush()
with tempfile.NamedTemporaryFile(suffix='.o', mode='rb') as objfp:
cmdline = [llc, llfp.name, '-o', objfp.name, '-filetype=obj']
(result, stdout, stderr) = call_capture_output(cmdline)
obj_code = objfp.read()
return obj_code | def llvmir_to_obj(source, llc='llc'):
'\n \n '
with tempfile.NamedTemporaryFile(mode='w', suffix='.ll') as llfp:
llfp.write(source)
llfp.file.flush()
with tempfile.NamedTemporaryFile(suffix='.o', mode='rb') as objfp:
cmdline = [llc, llfp.name, '-o', objfp.name, '-filetype=obj']
(result, stdout, stderr) = call_capture_output(cmdline)
obj_code = objfp.read()
return obj_code<|docstring|>Returns the compiled object code for the LLVM code *source*.<|endoftext|> |
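A hedged sketch chaining mlir_to_llvmir and llvmir_to_obj into an object file, assuming mlir-translate and llc are on PATH, the import path is mlir.run, and the input file already holds a module lowered to the LLVM dialect:
from mlir.run import mlir_to_llvmir, llvmir_to_obj  # assumed import path

mlir_src = open('lowered.mlir').read()  # assumed LLVM-dialect MLIR module
llvm_ir = mlir_to_llvmir(mlir_src)      # shells out to mlir-translate
obj_bytes = llvmir_to_obj(llvm_ir)      # shells out to llc -filetype=obj
with open('module.o', 'wb') as f:
    f.write(obj_bytes)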
16adf704bb5311ebf6189d0f9818595852b83efb6c016d4c4439b6c6ad5fbcda | def call_function(source: str, fn_name: str, args: List[Any], argtypes: Optional[List[ctypes._SimpleCData]]=None):
'\n Calls the function *fn_name* in *source*.\n\n :arg source: The MLIR code whose function is to be called.\n :arg args: A list of args to be passed to the function. Each arg can have\n one of the following types:\n - :class:`numpy.ndarray`\n - :class:`numpy.number\n - :class:`Memref`\n :arg fn_name: Name of the function op which is the to be called\n '
source = mlir_opt(source, ['-convert-std-to-llvm=emit-c-wrappers'])
fn_name = f'_mlir_ciface_{fn_name}'
if (argtypes is None):
argtypes = guess_argtypes(args)
args = [preprocess_arg(arg) for arg in args]
obj_code = llvmir_to_obj(mlir_to_llvmir(source))
toolchain = guess_toolchain()
(_, mod_name, ext_file, recompiled) = compile_from_string(toolchain, fn_name, obj_code, ['module.o'], source_is_binary=True)
f = ctypes.CDLL(ext_file)
fn = getattr(f, fn_name)
fn.argtypes = argtypes
fn.restype = None
fn(*args) | Calls the function *fn_name* in *source*.
:arg source: The MLIR code whose function is to be called.
:arg args: A list of args to be passed to the function. Each arg can have
one of the following types:
- :class:`numpy.ndarray`
- :class:`numpy.number
- :class:`Memref`
:arg fn_name: Name of the function op which is the to be called | mlir/run.py | call_function | kaushikcfd/pymlir | 0 | python | def call_function(source: str, fn_name: str, args: List[Any], argtypes: Optional[List[ctypes._SimpleCData]]=None):
'\n Calls the function *fn_name* in *source*.\n\n :arg source: The MLIR code whose function is to be called.\n :arg args: A list of args to be passed to the function. Each arg can have\n one of the following types:\n - :class:`numpy.ndarray`\n - :class:`numpy.number\n - :class:`Memref`\n :arg fn_name: Name of the function op which is the to be called\n '
source = mlir_opt(source, ['-convert-std-to-llvm=emit-c-wrappers'])
fn_name = f'_mlir_ciface_{fn_name}'
if (argtypes is None):
argtypes = guess_argtypes(args)
args = [preprocess_arg(arg) for arg in args]
obj_code = llvmir_to_obj(mlir_to_llvmir(source))
toolchain = guess_toolchain()
(_, mod_name, ext_file, recompiled) = compile_from_string(toolchain, fn_name, obj_code, ['module.o'], source_is_binary=True)
f = ctypes.CDLL(ext_file)
fn = getattr(f, fn_name)
fn.argtypes = argtypes
fn.restype = None
fn(*args) | def call_function(source: str, fn_name: str, args: List[Any], argtypes: Optional[List[ctypes._SimpleCData]]=None):
'\n Calls the function *fn_name* in *source*.\n\n :arg source: The MLIR code whose function is to be called.\n :arg args: A list of args to be passed to the function. Each arg can have\n one of the following types:\n - :class:`numpy.ndarray`\n - :class:`numpy.number\n - :class:`Memref`\n :arg fn_name: Name of the function op which is the to be called\n '
source = mlir_opt(source, ['-convert-std-to-llvm=emit-c-wrappers'])
fn_name = f'_mlir_ciface_{fn_name}'
if (argtypes is None):
argtypes = guess_argtypes(args)
args = [preprocess_arg(arg) for arg in args]
obj_code = llvmir_to_obj(mlir_to_llvmir(source))
toolchain = guess_toolchain()
(_, mod_name, ext_file, recompiled) = compile_from_string(toolchain, fn_name, obj_code, ['module.o'], source_is_binary=True)
f = ctypes.CDLL(ext_file)
fn = getattr(f, fn_name)
fn.argtypes = argtypes
fn.restype = None
fn(*args)<|docstring|>Calls the function *fn_name* in *source*.
:arg source: The MLIR code whose function is to be called.
:arg args: A list of args to be passed to the function. Each arg can have
one of the following types:
- :class:`numpy.ndarray`
- :class:`numpy.number
- :class:`Memref`
:arg fn_name: Name of the function op which is to be called<|endoftext|>
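A hedged end-to-end sketch for call_function; the kernel file name, function name and import path are assumptions, and the MLIR module is assumed to define a function with that name taking a memref argument:
import numpy as np
from mlir.run import call_function  # assumed import path

source = open('kernel.mlir').read()   # assumed to define a func named 'fill' taking a memref
buf = np.zeros(4, dtype=np.float64)
call_function(source, 'fill', [buf])  # lowered with emit-c-wrappers and invoked via ctypes
print(buf)                            # mutated in place by the compiled kernel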
689345f31b157bcf74d72311014d0a006946095a4af795930df6bb34716db476 | @staticmethod
def from_numpy(ary):
'\n Create a :class:`Memref` from a :class:`numpy.ndarray`\n '
shape = ary.shape
strides = tuple(((stride // ary.itemsize) for stride in ary.strides))
return Memref(ary.ctypes.data, shape, strides) | Create a :class:`Memref` from a :class:`numpy.ndarray` | mlir/run.py | from_numpy | kaushikcfd/pymlir | 0 | python | @staticmethod
def from_numpy(ary):
'\n \n '
shape = ary.shape
strides = tuple(((stride // ary.itemsize) for stride in ary.strides))
return Memref(ary.ctypes.data, shape, strides) | @staticmethod
def from_numpy(ary):
'\n \n '
shape = ary.shape
strides = tuple(((stride // ary.itemsize) for stride in ary.strides))
return Memref(ary.ctypes.data, shape, strides)<|docstring|>Create a :class:`Memref` from a :class:`numpy.ndarray`<|endoftext|> |
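A pure-NumPy illustration of the element-stride computation from_numpy performs (dividing byte strides by the array's itemsize):
import numpy as np

ary = np.zeros((3, 4), dtype=np.float64)  # C-contiguous, itemsize 8
strides_in_elements = tuple(s // ary.itemsize for s in ary.strides)
print(ary.shape, strides_in_elements)     # (3, 4) (4, 1)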
ce2a701a0b642bdb9762eb01e660b2b95d518150abd7d53db3a9a5e3389c29dd | def parse_knots(flags_knots: List[Text], flags_knots_connect: List[Text], flags_initial_a: List[Text]) -> Tuple[(List[int], List[int], List[np.float64])]:
"Parses knots arguments from flags to list of parameters.\n\n Before parsing, all input data are a list of strings. This function\n transforms them into the corresponding numerical values.\n\n Args:\n flags_knots: the knots in a piecewise linear infection rate model. Each\n element indicates the length of a piece.\n flags_knots_connect: indicates whether or not the end of a pieces is\n continous.\n flags_initial_a: vector that indicates the infection rate at each knot.\n\n Raises:\n ValueError: if the lengths of 'knots' and 'knots_connect' are unequal.\n\n Returns:\n parsed_knots: the knots in a piecewise linear infection rate model. Each\n element indicates the length of a piece.\n parsed_knots_connect: indicates whether or not the end of a pieces is\n continous.\n parsed_initial_a: vector that indicates the infection rate at each knot.\n\n "
parsed_knots = [int(k) for k in flags_knots]
if (flags_knots_connect is None):
parsed_knots_connect = ([1] * len(flags_knots))
else:
parsed_knots_connect = [int(k) for k in flags_knots_connect]
if (len(parsed_knots_connect) != len(parsed_knots)):
raise ValueError('The length of knots should be the same as the lengthof knots_connect.')
n_weights = ((len(parsed_knots) + 1) + sum((1 - np.array(parsed_knots_connect))))
if (flags_initial_a is None):
parsed_initial_a = ([0.5] * n_weights)
else:
parsed_initial_a = [np.float64(a) for a in flags_initial_a]
return (parsed_knots, parsed_knots_connect, parsed_initial_a) | Parses knots arguments from flags to list of parameters.
Before parsing, all input data are a list of strings. This function
transforms them into the corresponding numerical values.
Args:
flags_knots: the knots in a piecewise linear infection rate model. Each
element indicates the length of a piece.
flags_knots_connect: indicates whether or not the end of a piece is
continuous.
flags_initial_a: vector that indicates the infection rate at each knot.
Raises:
ValueError: if the lengths of 'knots' and 'knots_connect' are unequal.
Returns:
parsed_knots: the knots in a piecewise linear infection rate model. Each
element indicates the length of a piece.
parsed_knots_connect: indicates whether or not the end of a piece is
continuous.
parsed_initial_a: vector that indicates the infection rate at each knot. | python/code case/io_utils.py | parse_knots | COVID19BIOSTAT/covid19_prediction | 14 | python | def parse_knots(flags_knots: List[Text], flags_knots_connect: List[Text], flags_initial_a: List[Text]) -> Tuple[(List[int], List[int], List[np.float64])]:
"Parses knots arguments from flags to list of parameters.\n\n Before parsing, all input data are a list of strings. This function\n transforms them into the corresponding numerical values.\n\n Args:\n flags_knots: the knots in a piecewise linear infection rate model. Each\n element indicates the length of a piece.\n flags_knots_connect: indicates whether or not the end of a pieces is\n continous.\n flags_initial_a: vector that indicates the infection rate at each knot.\n\n Raises:\n ValueError: if the lengths of 'knots' and 'knots_connect' are unequal.\n\n Returns:\n parsed_knots: the knots in a piecewise linear infection rate model. Each\n element indicates the length of a piece.\n parsed_knots_connect: indicates whether or not the end of a pieces is\n continous.\n parsed_initial_a: vector that indicates the infection rate at each knot.\n\n "
parsed_knots = [int(k) for k in flags_knots]
if (flags_knots_connect is None):
parsed_knots_connect = ([1] * len(flags_knots))
else:
parsed_knots_connect = [int(k) for k in flags_knots_connect]
if (len(parsed_knots_connect) != len(parsed_knots)):
raise ValueError('The length of knots should be the same as the lengthof knots_connect.')
n_weights = ((len(parsed_knots) + 1) + sum((1 - np.array(parsed_knots_connect))))
if (flags_initial_a is None):
parsed_initial_a = ([0.5] * n_weights)
else:
parsed_initial_a = [np.float64(a) for a in flags_initial_a]
return (parsed_knots, parsed_knots_connect, parsed_initial_a) | def parse_knots(flags_knots: List[Text], flags_knots_connect: List[Text], flags_initial_a: List[Text]) -> Tuple[(List[int], List[int], List[np.float64])]:
"Parses knots arguments from flags to list of parameters.\n\n Before parsing, all input data are a list of strings. This function\n transforms them into the corresponding numerical values.\n\n Args:\n flags_knots: the knots in a piecewise linear infection rate model. Each\n element indicates the length of a piece.\n flags_knots_connect: indicates whether or not the end of a pieces is\n continous.\n flags_initial_a: vector that indicates the infection rate at each knot.\n\n Raises:\n ValueError: if the lengths of 'knots' and 'knots_connect' are unequal.\n\n Returns:\n parsed_knots: the knots in a piecewise linear infection rate model. Each\n element indicates the length of a piece.\n parsed_knots_connect: indicates whether or not the end of a pieces is\n continous.\n parsed_initial_a: vector that indicates the infection rate at each knot.\n\n "
parsed_knots = [int(k) for k in flags_knots]
if (flags_knots_connect is None):
parsed_knots_connect = ([1] * len(flags_knots))
else:
parsed_knots_connect = [int(k) for k in flags_knots_connect]
if (len(parsed_knots_connect) != len(parsed_knots)):
raise ValueError('The length of knots should be the same as the lengthof knots_connect.')
n_weights = ((len(parsed_knots) + 1) + sum((1 - np.array(parsed_knots_connect))))
if (flags_initial_a is None):
parsed_initial_a = ([0.5] * n_weights)
else:
parsed_initial_a = [np.float64(a) for a in flags_initial_a]
return (parsed_knots, parsed_knots_connect, parsed_initial_a)<|docstring|>Parses knots arguments from flags to list of parameters.
Before parsing, all input data are a list of strings. This function
transforms them into the corresponding numerical values.
Args:
flags_knots: the knots in a piecewise linear infection rate model. Each
element indicates the length of a piece.
flags_knots_connect: indicates whether or not the end of a piece is
continuous.
flags_initial_a: vector that indicates the infection rate at each knot.
Raises:
ValueError: if the lengths of 'knots' and 'knots_connect' are unequal.
Returns:
parsed_knots: the knots in a piecewise linear infection rate model. Each
element indicates the length of a piece.
parsed_knots_connect: indicates whether or not the end of a piece is
continuous.
parsed_initial_a: vector that indicates the infection rate at each knot.<|endoftext|> |
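A minimal usage sketch of parse_knots follows; the flag values are hypothetical and only illustrate how the weight count is derived from the pieces and their connectivity.

knots, connect, initial_a = parse_knots(
    flags_knots=['7', '7', '14'],         # three pieces of 7, 7 and 14 days
    flags_knots_connect=['1', '0', '1'],  # the second piece starts with a jump
    flags_initial_a=None,
)
# knots == [7, 7, 14], connect == [1, 0, 1]
# n_weights = (3 + 1) + (0 + 1 + 0) = 5, so initial_a == [0.5] * 5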
3219361faf613a46b8fe4c628d655961abeb251257caf191165f4547cd879429 | def parse_estimated_model(estimator: infection_model.Covid19InfectionsEstimator):
'Shows the optimal model estimation after training.\n\n Args:\n estimator: an instance of Covid19InfectionsEstimator, Covid19DeathEstimator\n or Covid19CombinedEstimator.\n\n '
print(f'The minimum loss in training = {estimator.final_loss}.')
print(f'The selected t0 = {estimator.final_model.t0}.')
estimated_weights = estimator.final_model.weights
for single_weight in estimated_weights:
full_name = single_weight.name
pattern = re.search(':(\\d+)', full_name)
short_name = (full_name[:pattern.start()] if pattern else full_name)
single_weight_arr = single_weight.numpy()
print(f'The estimated {short_name} = {single_weight_arr.flatten()}.') | Shows the optimal model estimation after training.
Args:
estimator: an instance of Covid19InfectionsEstimator, Covid19DeathEstimator
or Covid19CombinedEstimator. | python/code case/io_utils.py | parse_estimated_model | COVID19BIOSTAT/covid19_prediction | 14 | python | def parse_estimated_model(estimator: infection_model.Covid19InfectionsEstimator):
'Shows the optimal model estimation after training.\n\n Args:\n estimator: an instance of Covid19InfectionsEstimator, Covid19DeathEstimator\n or Covid19CombinedEstimator.\n\n '
print(f'The minimum loss in training = {estimator.final_loss}.')
print(f'The selected t0 = {estimator.final_model.t0}.')
estimated_weights = estimator.final_model.weights
for single_weight in estimated_weights:
full_name = single_weight.name
pattern = re.search(':(\\d+)', full_name)
short_name = (full_name[:pattern.start()] if pattern else full_name)
single_weight_arr = single_weight.numpy()
print(f'The estimated {short_name} = {single_weight_arr.flatten()}.') | def parse_estimated_model(estimator: infection_model.Covid19InfectionsEstimator):
'Shows the optimal model estimation after training.\n\n Args:\n estimator: an instance of Covid19InfectionsEstimator, Covid19DeathEstimator\n or Covid19CombinedEstimator.\n\n '
print(f'The minimum loss in training = {estimator.final_loss}.')
print(f'The selected t0 = {estimator.final_model.t0}.')
estimated_weights = estimator.final_model.weights
for single_weight in estimated_weights:
full_name = single_weight.name
pattern = re.search(':(\\d+)', full_name)
short_name = (full_name[:pattern.start()] if pattern else full_name)
single_weight_arr = single_weight.numpy()
print(f'The estimated {short_name} = {single_weight_arr.flatten()}.')<|docstring|>Shows the optimal model estimation after training.
Args:
estimator: an instance of Covid19InfectionsEstimator, Covid19DeathEstimator
or Covid19CombinedEstimator.<|endoftext|> |
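The short-name logic above only strips a TensorFlow-style ':<index>' suffix from each weight name; a small sketch of that step in isolation, with a made-up variable name:

import re

full_name = 'piecewise_rate/a:0'  # hypothetical tf.Variable name
pattern = re.search(r':(\d+)', full_name)
short_name = full_name[:pattern.start()] if pattern else full_name
# short_name == 'piecewise_rate/a'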
cb9ab053a6865da4edfe050e14bd3979388797ae0145eca6173fc1c607f733c3 | def export_estimation_and_prediction(estimator: infection_model.Covid19InfectionsEstimator, test_duration: int, output_path: Text, suffix: Optional[Text]='', flatten_future: bool=False, to_json: bool=False):
'Exports estimated infection rates and prediction (infections, deaths).\n\n Args:\n estimator: an instance of Covid19InfectionsEstimator, Covid19DeathEstimator\n or Covid19CombinedEstimator.\n test_duration: specifies the number of days for prediction. The first day\n in the prediction should be aligned with the time of first observed case\n in training data.\n output_path: specifies the output directory for the predicted values and\n infection rate features.\n suffix: optionally passes a suffix to the output files.\n flatten_future: xxx.\n to_json: if true the export data are saved in a json file; otherwise each\n item in the export is saved in a separate npy file.\n\n '
if (not os.path.exists(output_path)):
os.makedirs(output_path)
infection_rate_features = estimator.get_infect_rate_features(test_duration, flatten_future)
export_data = dict(predicted_infection_rate=infection_rate_features[0].numpy(), predicted_reproduction_number=infection_rate_features[1].numpy(), predicted_daily_observed=estimator.predict(test_duration, True, flatten_future).numpy(), predicted_daily_infected=estimator.predict(test_duration, False, flatten_future).numpy(), predicted_daily_death=estimator.predict_death(test_duration, flatten_future).numpy(), best_weights=estimator.final_model.get_weights(), best_t0=estimator.final_model.t0)
if to_json:
dumped = json.dumps(export_data, cls=NumpyEncoder)
with open(os.path.join(output_path, f'export{suffix}.json'), 'w') as f:
json.dump(dumped, f)
else:
for (key, value) in export_data.items():
np.save(file=os.path.join(output_path, (key + suffix)), arr=value) | Exports estimated infection rates and prediction (infections, deaths).
Args:
estimator: an instance of Covid19InfectionsEstimator, Covid19DeathEstimator
or Covid19CombinedEstimator.
test_duration: specifies the number of days for prediction. The first day
in the prediction should be aligned with the time of first observed case
in training data.
output_path: specifies the output directory for the predicted values and
infection rate features.
suffix: optionally passes a suffix to the output files.
flatten_future: xxx.
to_json: if true the export data are saved in a json file; otherwise each
item in the export is saved in a separate npy file. | python/code case/io_utils.py | export_estimation_and_prediction | COVID19BIOSTAT/covid19_prediction | 14 | python | def export_estimation_and_prediction(estimator: infection_model.Covid19InfectionsEstimator, test_duration: int, output_path: Text, suffix: Optional[Text]='', flatten_future: bool=False, to_json: bool=False):
'Exports estimated infection rates and prediction (infections, deaths).\n\n Args:\n estimator: an instance of Covid19InfectionsEstimator, Covid19DeathEstimator\n or Covid19CombinedEstimator.\n test_duration: specifies the number of days for prediction. The first day\n in the prediction should be aligned with the time of first observed case\n in training data.\n output_path: specifies the output directory for the predicted values and\n infection rate features.\n suffix: optionally passes a suffix to the output files.\n flatten_future: xxx.\n to_json: if true the export data are saved in a json file; otherwise each\n item in the export is saved in a separate npy file.\n\n '
if (not os.path.exists(output_path)):
os.makedirs(output_path)
infection_rate_features = estimator.get_infect_rate_features(test_duration, flatten_future)
export_data = dict(predicted_infection_rate=infection_rate_features[0].numpy(), predicted_reproduction_number=infection_rate_features[1].numpy(), predicted_daily_observed=estimator.predict(test_duration, True, flatten_future).numpy(), predicted_daily_infected=estimator.predict(test_duration, False, flatten_future).numpy(), predicted_daily_death=estimator.predict_death(test_duration, flatten_future).numpy(), best_weights=estimator.final_model.get_weights(), best_t0=estimator.final_model.t0)
if to_json:
dumped = json.dumps(export_data, cls=NumpyEncoder)
with open(os.path.join(output_path, f'export{suffix}.json'), 'w') as f:
json.dump(dumped, f)
else:
for (key, value) in export_data.items():
np.save(file=os.path.join(output_path, (key + suffix)), arr=value) | def export_estimation_and_prediction(estimator: infection_model.Covid19InfectionsEstimator, test_duration: int, output_path: Text, suffix: Optional[Text]='', flatten_future: bool=False, to_json: bool=False):
'Exports estimated infection rates and prediction (infections, deaths).\n\n Args:\n estimator: an instance of Covid19InfectionsEstimator, Covid19DeathEstimator\n or Covid19CombinedEstimator.\n test_duration: specifies the number of days for prediction. The first day\n in the prediction should be aligned with the time of first observed case\n in training data.\n output_path: specifies the output directory for the predicted values and\n infection rate features.\n suffix: optionally passes a suffix to the output files.\n flatten_future: xxx.\n to_json: if true the export data are saved in a json file; otherwise each\n item in the export is saved in a separate npy file.\n\n '
if (not os.path.exists(output_path)):
os.makedirs(output_path)
infection_rate_features = estimator.get_infect_rate_features(test_duration, flatten_future)
export_data = dict(predicted_infection_rate=infection_rate_features[0].numpy(), predicted_reproduction_number=infection_rate_features[1].numpy(), predicted_daily_observed=estimator.predict(test_duration, True, flatten_future).numpy(), predicted_daily_infected=estimator.predict(test_duration, False, flatten_future).numpy(), predicted_daily_death=estimator.predict_death(test_duration, flatten_future).numpy(), best_weights=estimator.final_model.get_weights(), best_t0=estimator.final_model.t0)
if to_json:
dumped = json.dumps(export_data, cls=NumpyEncoder)
with open(os.path.join(output_path, f'export{suffix}.json'), 'w') as f:
json.dump(dumped, f)
else:
for (key, value) in export_data.items():
np.save(file=os.path.join(output_path, (key + suffix)), arr=value)<|docstring|>Exports estimated infection rates and prediction (infections, deaths).
Args:
estimator: an instance of Covid19InfectionsEstimator, Covid19DeathEstimator
or Covid19CombinedEstimator.
test_duration: specifies the number of days for prediction. The first day
in the prediction should be aligned with the time of first observed case
in training data.
output_path: specifies the output directory for the predicted values and
infection rate features.
suffix: optionally passes a suffix to the output files.
flatten_future: xxx.
to_json: if true the export data are saved in a json file; otherwise each
item in the export is saved in a separate npy file.<|endoftext|> |
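A hedged sketch of driving the export after training; `estimator` stands in for an already-fitted model wrapper and is assumed rather than constructed here:

# `estimator` is assumed to be a fitted Covid19InfectionsEstimator (not built here)
export_estimation_and_prediction(
    estimator,
    test_duration=30,         # predict 30 days from the first observed case
    output_path='./exports',  # hypothetical output directory
    suffix='_run1',           # e.g. predicted_daily_death_run1.npy
    to_json=False,            # one .npy file per exported item
)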
6c6bef14e5db0afc013e1ba85af0efa28556e592e121ba2236c37770630a402e | def parse_json_export(fpath: Text) -> Dict[(Text, Any)]:
'Parses export data from a json file into a dict of arrays.'
with open(fpath, 'r') as f:
jdata = json.load(f)
return json.loads(jdata) | Parses export data from a json file into a dict of arrays. | python/code case/io_utils.py | parse_json_export | COVID19BIOSTAT/covid19_prediction | 14 | python | def parse_json_export(fpath: Text) -> Dict[(Text, Any)]:
with open(fpath, 'r') as f:
jdata = json.load(f)
return json.loads(jdata) | def parse_json_export(fpath: Text) -> Dict[(Text, Any)]:
with open(fpath, 'r') as f:
jdata = json.load(f)
return json.loads(jdata)<|docstring|>Parses export data from a json file into a dict of arrays.<|endoftext|> |
53c7cd0aa1720abc2e5a460fd42b4eb63939904247888a3775469a15abf05645 | def default(self, obj):
'Required to encode numpy arrays.'
if isinstance(obj, np.integer):
return int(obj)
elif isinstance(obj, np.floating):
return float(obj)
elif isinstance(obj, np.ndarray):
return obj.tolist()
return json.JSONEncoder.default(self, obj) | Required to encode numpy arrays. | python/code case/io_utils.py | default | COVID19BIOSTAT/covid19_prediction | 14 | python | def default(self, obj):
if isinstance(obj, np.integer):
return int(obj)
elif isinstance(obj, np.floating):
return float(obj)
elif isinstance(obj, np.ndarray):
return obj.tolist()
return json.JSONEncoder.default(self, obj) | def default(self, obj):
if isinstance(obj, np.integer):
return int(obj)
elif isinstance(obj, np.floating):
return float(obj)
elif isinstance(obj, np.ndarray):
return obj.tolist()
return json.JSONEncoder.default(self, obj)<|docstring|>Required to encode numpy arrays.<|endoftext|> |
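A small round-trip sketch showing why the encoder hook is needed: the standard json module rejects numpy scalars and arrays, while cls=NumpyEncoder converts them to native types (the payload below is made up):

import json
import numpy as np

payload = {'best_t0': np.int64(12), 'rate': np.array([0.1, 0.2, 0.3])}
encoded = json.dumps(payload, cls=NumpyEncoder)
# '{"best_t0": 12, "rate": [0.1, 0.2, 0.3]}'
decoded = json.loads(encoded)
# The exporter above writes such a string with a second json.dump, which is why
# parse_json_export calls json.loads on the value it loads back from the file.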
1b58ab1d6da735d5c5066a2690885632bf125d7f4b0a6edce51490a67db8caa1 | def sim_distill_loss(self, s_feats_list, t_feats_list):
'\n compute similarity distillation loss\n :param score_list:\n :param mimic(list): [teacher, student]\n :return:\n '
loss = 0
for (s_feats, t_feats) in zip(s_feats_list, t_feats_list):
s_similarity = torch.mm(s_feats, s_feats.transpose(0, 1))
s_similarity = F.normalize(s_similarity, p=2, dim=1)
t_similarity = torch.mm(t_feats, t_feats.transpose(0, 1)).detach()
t_similarity = F.normalize(t_similarity, p=2, dim=1)
loss += (s_similarity - t_similarity).pow(2).mean()
return loss | compute similarity distillation loss
:param s_feats_list: list of student feature tensors
:param t_feats_list: list of teacher feature tensors (targets)
:return: | lightreid/losses/self_distill_loss.py | sim_distill_loss | AsyaPes/light-reid-master | 296 | python | def sim_distill_loss(self, s_feats_list, t_feats_list):
'\n compute similarity distillation loss\n :param score_list:\n :param mimic(list): [teacher, student]\n :return:\n '
loss = 0
for (s_feats, t_feats) in zip(s_feats_list, t_feats_list):
s_similarity = torch.mm(s_feats, s_feats.transpose(0, 1))
s_similarity = F.normalize(s_similarity, p=2, dim=1)
t_similarity = torch.mm(t_feats, t_feats.transpose(0, 1)).detach()
t_similarity = F.normalize(t_similarity, p=2, dim=1)
loss += (s_similarity - t_similarity).pow(2).mean()
return loss | def sim_distill_loss(self, s_feats_list, t_feats_list):
'\n compute similarity distillation loss\n :param score_list:\n :param mimic(list): [teacher, student]\n :return:\n '
loss = 0
for (s_feats, t_feats) in zip(s_feats_list, t_feats_list):
s_similarity = torch.mm(s_feats, s_feats.transpose(0, 1))
s_similarity = F.normalize(s_similarity, p=2, dim=1)
t_similarity = torch.mm(t_feats, t_feats.transpose(0, 1)).detach()
t_similarity = F.normalize(t_similarity, p=2, dim=1)
loss += (s_similarity - t_similarity).pow(2).mean()
return loss<|docstring|>compute similarity distillation loss
:param s_feats_list: list of student feature tensors
:param t_feats_list: list of teacher feature tensors (targets)
:return:<|endoftext|> |
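A runnable sketch of the pairwise-similarity matching above with random features; the shapes are made up, and the computation is repeated inline rather than called through the module the repository defines:

import torch
import torch.nn.functional as F

s_feats = torch.randn(8, 256)  # hypothetical student features for a batch of 8
t_feats = torch.randn(8, 512)  # teacher features may have a different dimension

# One (student, teacher) pair of the loop in sim_distill_loss
s_sim = F.normalize(torch.mm(s_feats, s_feats.t()), p=2, dim=1)
t_sim = F.normalize(torch.mm(t_feats, t_feats.t()).detach(), p=2, dim=1)
loss = (s_sim - t_sim).pow(2).mean()  # both similarity matrices are 8 x 8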
80dd0b585c84e0fc8976e008e509b4a5daf49ab1a3042cf0c4518470fca09e5a | def kl_div_loss(logits_s, logits_t, mini=1e-08):
'\n :param logits_s: student score\n :param logits_t: teacher score as target\n :param mini: for number stable\n :return:\n '
logits_t = logits_t.detach()
prob1 = F.softmax(logits_s, dim=1)
prob2 = F.softmax(logits_t, dim=1)
loss = (torch.sum((prob2 * torch.log((mini + (prob2 / (prob1 + mini))))), 1) + torch.sum((prob1 * torch.log((mini + (prob1 / (prob2 + mini))))), 1))
return loss.mean() | :param logits_s: student score
:param logits_t: teacher score as target
:param mini: small constant for numerical stability
:return: | lightreid/losses/self_distill_loss.py | kl_div_loss | AsyaPes/light-reid-master | 296 | python | def kl_div_loss(logits_s, logits_t, mini=1e-08):
'\n :param logits_s: student score\n :param logits_t: teacher score as target\n :param mini: for number stable\n :return:\n '
logits_t = logits_t.detach()
prob1 = F.softmax(logits_s, dim=1)
prob2 = F.softmax(logits_t, dim=1)
loss = (torch.sum((prob2 * torch.log((mini + (prob2 / (prob1 + mini))))), 1) + torch.sum((prob1 * torch.log((mini + (prob1 / (prob2 + mini))))), 1))
return loss.mean() | def kl_div_loss(logits_s, logits_t, mini=1e-08):
'\n :param logits_s: student score\n :param logits_t: teacher score as target\n :param mini: for number stable\n :return:\n '
logits_t = logits_t.detach()
prob1 = F.softmax(logits_s, dim=1)
prob2 = F.softmax(logits_t, dim=1)
loss = (torch.sum((prob2 * torch.log((mini + (prob2 / (prob1 + mini))))), 1) + torch.sum((prob1 * torch.log((mini + (prob1 / (prob2 + mini))))), 1))
return loss.mean()<|docstring|>:param logits_s: student score
:param logits_t: teacher score as target
:param mini: small constant for numerical stability
:return:<|endoftext|> |
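The loss above adds KL(teacher || student) and KL(student || teacher) per sample, with `mini` guarding the logarithms and divisions; gradients reach only the student because logits_t is detached. A quick sketch with made-up shapes:

import torch

logits_student = torch.randn(4, 10)  # hypothetical batch of 4 samples, 10 classes
logits_teacher = torch.randn(4, 10)

loss = kl_div_loss(logits_student, logits_teacher)       # scalar tensor
near_zero = kl_div_loss(logits_teacher, logits_teacher)  # ~0 for identical logits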
6a1f60b23b2cd2f3fe41a169543ebc7beefae075730d0d28b9bf963ea5f7b1a0 | def get_class(model_name):
'Returns the model class with the specified name.\n\n :param model_name: the name of the class\n :returns: the class with the specified name\n :raises: Exception if there is no class associated with the name\n '
for model in Base.__subclasses__():
if (model.__name__ == model_name):
return model
raise exception.IronicException((_('Cannot find model with name: %s') % model_name)) | Returns the model class with the specified name.
:param model_name: the name of the class
:returns: the class with the specified name
:raises: Exception if there is no class associated with the name | ironic/db/sqlalchemy/models.py | get_class | dangervon/ironic | 350 | python | def get_class(model_name):
'Returns the model class with the specified name.\n\n :param model_name: the name of the class\n :returns: the class with the specified name\n :raises: Exception if there is no class associated with the name\n '
for model in Base.__subclasses__():
if (model.__name__ == model_name):
return model
raise exception.IronicException((_('Cannot find model with name: %s') % model_name)) | def get_class(model_name):
'Returns the model class with the specified name.\n\n :param model_name: the name of the class\n :returns: the class with the specified name\n :raises: Exception if there is no class associated with the name\n '
for model in Base.__subclasses__():
if (model.__name__ == model_name):
return model
raise exception.IronicException((_('Cannot find model with name: %s') % model_name))<|docstring|>Returns the model class with the specified name.
:param model_name: the name of the class
:returns: the class with the specified name
:raises: Exception if there is no class associated with the name<|endoftext|> |
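A hedged sketch of the lookup; 'Node' stands in for one of the declarative models registered on Base, and the error path mirrors the docstring:

node_model = get_class('Node')  # assumes a model class named Node is declared
instance = node_model()

try:
    get_class('NoSuchModel')
except exception.IronicException as err:
    print(err)  # Cannot find model with name: NoSuchModel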
3f79b61bd6456de4ac4da073cbdb95d8d6cfeca89b8e959f30e7599031ddefa5 | def list_datasets():
'Returns the list of available FiftyOne datasets.\n\n Returns:\n a list of :class:`Dataset` names\n '
return sorted(foo.DatasetDocument.objects.distinct('name')) | Returns the list of available FiftyOne datasets.
Returns:
a list of :class:`Dataset` names | fiftyone/core/dataset.py | list_datasets | dadounhind/fiftyone | 1 | python | def list_datasets():
'Returns the list of available FiftyOne datasets.\n\n Returns:\n a list of :class:`Dataset` names\n '
return sorted(foo.DatasetDocument.objects.distinct('name')) | def list_datasets():
'Returns the list of available FiftyOne datasets.\n\n Returns:\n a list of :class:`Dataset` names\n '
return sorted(foo.DatasetDocument.objects.distinct('name'))<|docstring|>Returns the list of available FiftyOne datasets.
Returns:
a list of :class:`Dataset` names<|endoftext|> |
7c44527294ae94fe0ec4ca5c497d6b80117138eb5037aa44932b5ac558049e8a | def dataset_exists(name):
'Checks if the dataset exists.\n\n Args:\n name: the name of the dataset\n\n Returns:\n True/False\n '
try:
foo.DatasetDocument.objects.get(name=name)
return True
except moe.DoesNotExist:
return False | Checks if the dataset exists.
Args:
name: the name of the dataset
Returns:
True/False | fiftyone/core/dataset.py | dataset_exists | dadounhind/fiftyone | 1 | python | def dataset_exists(name):
'Checks if the dataset exists.\n\n Args:\n name: the name of the dataset\n\n Returns:\n True/False\n '
try:
foo.DatasetDocument.objects.get(name=name)
return True
except moe.DoesNotExist:
return False | def dataset_exists(name):
'Checks if the dataset exists.\n\n Args:\n name: the name of the dataset\n\n Returns:\n True/False\n '
try:
foo.DatasetDocument.objects.get(name=name)
return True
except moe.DoesNotExist:
return False<|docstring|>Checks if the dataset exists.
Args:
name: the name of the dataset
Returns:
True/False<|endoftext|> |
efc964e5bf3793b06d632944e18aa1365b9d3d2a4c1e7fd78cd4bea972c2e1d5 | def load_dataset(name):
'Loads the FiftyOne dataset with the given name.\n\n Note that :class:`Dataset` instances are singletons keyed by ``name``, so\n all calls to this function with a given dataset ``name`` in a program will\n return the same object.\n\n To create a new dataset, use the :class:`Dataset` constructor.\n\n Args:\n name: the name of the dataset\n\n Returns:\n a :class:`Dataset`\n\n Raises:\n ValueError: if no dataset exists with the given name\n '
return Dataset(name, _create=False) | Loads the FiftyOne dataset with the given name.
Note that :class:`Dataset` instances are singletons keyed by ``name``, so
all calls to this function with a given dataset ``name`` in a program will
return the same object.
To create a new dataset, use the :class:`Dataset` constructor.
Args:
name: the name of the dataset
Returns:
a :class:`Dataset`
Raises:
ValueError: if no dataset exists with the given name | fiftyone/core/dataset.py | load_dataset | dadounhind/fiftyone | 1 | python | def load_dataset(name):
'Loads the FiftyOne dataset with the given name.\n\n Note that :class:`Dataset` instances are singletons keyed by ``name``, so\n all calls to this function with a given dataset ``name`` in a program will\n return the same object.\n\n To create a new dataset, use the :class:`Dataset` constructor.\n\n Args:\n name: the name of the dataset\n\n Returns:\n a :class:`Dataset`\n\n Raises:\n ValueError: if no dataset exists with the given name\n '
return Dataset(name, _create=False) | def load_dataset(name):
'Loads the FiftyOne dataset with the given name.\n\n Note that :class:`Dataset` instances are singletons keyed by ``name``, so\n all calls to this function with a given dataset ``name`` in a program will\n return the same object.\n\n To create a new dataset, use the :class:`Dataset` constructor.\n\n Args:\n name: the name of the dataset\n\n Returns:\n a :class:`Dataset`\n\n Raises:\n ValueError: if no dataset exists with the given name\n '
return Dataset(name, _create=False)<|docstring|>Loads the FiftyOne dataset with the given name.
Note that :class:`Dataset` instances are singletons keyed by ``name``, so
all calls to this function with a given dataset ``name`` in a program will
return the same object.
To create a new dataset, use the :class:`Dataset` constructor.
Args:
name: the name of the dataset
Returns:
a :class:`Dataset`
Raises:
ValueError: if no dataset exists with the given name<|endoftext|> |
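A short sketch of the singleton behaviour the docstring describes; the dataset name is hypothetical:

import fiftyone as fo

dataset = fo.Dataset("my-dataset")    # create the dataset once
same = fo.load_dataset("my-dataset")  # later lookups return the same object
assert same is dataset

try:
    fo.load_dataset("no-such-dataset")
except ValueError as err:
    print(err)  # no dataset exists with this name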
ac88a92cd2b5f19216937afe552e2980927286a86a50c0d388e285bf3d501e3f | def get_default_dataset_name():
'Returns a default dataset name based on the current time.\n\n Returns:\n a dataset name\n '
now = datetime.datetime.now()
name = now.strftime('%Y.%m.%d.%H.%M.%S')
if (name in list_datasets()):
name = now.strftime('%Y.%m.%d.%H.%M.%S.%f')
return name | Returns a default dataset name based on the current time.
Returns:
a dataset name | fiftyone/core/dataset.py | get_default_dataset_name | dadounhind/fiftyone | 1 | python | def get_default_dataset_name():
'Returns a default dataset name based on the current time.\n\n Returns:\n a dataset name\n '
now = datetime.datetime.now()
name = now.strftime('%Y.%m.%d.%H.%M.%S')
if (name in list_datasets()):
name = now.strftime('%Y.%m.%d.%H.%M.%S.%f')
return name | def get_default_dataset_name():
'Returns a default dataset name based on the current time.\n\n Returns:\n a dataset name\n '
now = datetime.datetime.now()
name = now.strftime('%Y.%m.%d.%H.%M.%S')
if (name in list_datasets()):
name = now.strftime('%Y.%m.%d.%H.%M.%S.%f')
return name<|docstring|>Returns a default dataset name based on the current time.
Returns:
a dataset name<|endoftext|> |
8ebbd282fbe33f67165fcf52fd065cc94553039de664dedb88e43a5dfc6fefdb | def make_unique_dataset_name(root):
'Makes a unique dataset name with the given root name.\n\n Args:\n root: the root name for the dataset\n\n Returns:\n the dataset name\n '
name = root
dataset_names = list_datasets()
if (name in dataset_names):
name += ('_' + _get_random_characters(6))
while (name in dataset_names):
name += _get_random_characters(1)
return name | Makes a unique dataset name with the given root name.
Args:
root: the root name for the dataset
Returns:
the dataset name | fiftyone/core/dataset.py | make_unique_dataset_name | dadounhind/fiftyone | 1 | python | def make_unique_dataset_name(root):
'Makes a unique dataset name with the given root name.\n\n Args:\n root: the root name for the dataset\n\n Returns:\n the dataset name\n '
name = root
dataset_names = list_datasets()
if (name in dataset_names):
name += ('_' + _get_random_characters(6))
while (name in dataset_names):
name += _get_random_characters(1)
return name | def make_unique_dataset_name(root):
'Makes a unique dataset name with the given root name.\n\n Args:\n root: the root name for the dataset\n\n Returns:\n the dataset name\n '
name = root
dataset_names = list_datasets()
if (name in dataset_names):
name += ('_' + _get_random_characters(6))
while (name in dataset_names):
name += _get_random_characters(1)
return name<|docstring|>Makes a unique dataset name with the given root name.
Args:
root: the root name for the dataset
Returns:
the dataset name<|endoftext|> |
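A sketch of the collision handling; the root name is hypothetical:

name = make_unique_dataset_name("experiments")
# "experiments" if that name is unused, otherwise something like
# "experiments_k3xq9z" with random characters appended until it is unique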
d06c29f25bd8e41c62dfb30fb055f51fc20244cc605c71f78edca98d5be3a936 | def get_default_dataset_dir(name):
'Returns the default dataset directory for the dataset with the given\n name.\n\n Args:\n name: the dataset name\n\n Returns:\n the default directory for the dataset\n '
return os.path.join(fo.config.default_dataset_dir, name) | Returns the default dataset directory for the dataset with the given
name.
Args:
name: the dataset name
Returns:
the default directory for the dataset | fiftyone/core/dataset.py | get_default_dataset_dir | dadounhind/fiftyone | 1 | python | def get_default_dataset_dir(name):
'Returns the default dataset directory for the dataset with the given\n name.\n\n Args:\n name: the dataset name\n\n Returns:\n the default directory for the dataset\n '
return os.path.join(fo.config.default_dataset_dir, name) | def get_default_dataset_dir(name):
'Returns the default dataset directory for the dataset with the given\n name.\n\n Args:\n name: the dataset name\n\n Returns:\n the default directory for the dataset\n '
return os.path.join(fo.config.default_dataset_dir, name)<|docstring|>Returns the default dataset directory for the dataset with the given
name.
Args:
name: the dataset name
Returns:
the default directory for the dataset<|endoftext|> |
b66fbca7661f177bd645afad970714ce8445d4f0fae99f2bbfc38ec4ba1e63fe | def delete_dataset(name, verbose=False):
'Deletes the FiftyOne dataset with the given name.\n\n If reference to the dataset exists in memory, only `Dataset.name` and\n `Dataset.deleted` will be valid attributes. Accessing any other attributes\n or methods will raise a :class:`DatasetError`\n\n If reference to a sample exists in memory, the sample\'s dataset will be\n "unset" such that `sample.in_dataset == False`\n\n Args:\n name: the name of the dataset\n verbose (False): whether to log the name of the deleted dataset\n\n Raises:\n ValueError: if the dataset is not found\n '
dataset = load_dataset(name)
dataset.delete()
if verbose:
logger.info("Dataset '%s' deleted", name) | Deletes the FiftyOne dataset with the given name.
If a reference to the dataset exists in memory, only `Dataset.name` and
`Dataset.deleted` will be valid attributes. Accessing any other attributes
or methods will raise a :class:`DatasetError`.
If a reference to a sample exists in memory, the sample's dataset will be
"unset" such that `sample.in_dataset == False`
Args:
name: the name of the dataset
verbose (False): whether to log the name of the deleted dataset
Raises:
ValueError: if the dataset is not found | fiftyone/core/dataset.py | delete_dataset | dadounhind/fiftyone | 1 | python | def delete_dataset(name, verbose=False):
'Deletes the FiftyOne dataset with the given name.\n\n If reference to the dataset exists in memory, only `Dataset.name` and\n `Dataset.deleted` will be valid attributes. Accessing any other attributes\n or methods will raise a :class:`DatasetError`\n\n If reference to a sample exists in memory, the sample\'s dataset will be\n "unset" such that `sample.in_dataset == False`\n\n Args:\n name: the name of the dataset\n verbose (False): whether to log the name of the deleted dataset\n\n Raises:\n ValueError: if the dataset is not found\n '
dataset = load_dataset(name)
dataset.delete()
if verbose:
logger.info("Dataset '%s' deleted", name) | def delete_dataset(name, verbose=False):
'Deletes the FiftyOne dataset with the given name.\n\n If reference to the dataset exists in memory, only `Dataset.name` and\n `Dataset.deleted` will be valid attributes. Accessing any other attributes\n or methods will raise a :class:`DatasetError`\n\n If reference to a sample exists in memory, the sample\'s dataset will be\n "unset" such that `sample.in_dataset == False`\n\n Args:\n name: the name of the dataset\n verbose (False): whether to log the name of the deleted dataset\n\n Raises:\n ValueError: if the dataset is not found\n '
dataset = load_dataset(name)
dataset.delete()
if verbose:
logger.info("Dataset '%s' deleted", name)<|docstring|>Deletes the FiftyOne dataset with the given name.
If a reference to the dataset exists in memory, only `Dataset.name` and
`Dataset.deleted` will be valid attributes. Accessing any other attributes
or methods will raise a :class:`DatasetError`.
If a reference to a sample exists in memory, the sample's dataset will be
"unset" such that `sample.in_dataset == False`
Args:
name: the name of the dataset
verbose (False): whether to log the name of the deleted dataset
Raises:
ValueError: if the dataset is not found<|endoftext|> |
a951db9bc88e9abace28fde462333f1319c9a3de0f5e644e07fa50ed2934bd46 | def delete_datasets(glob_patt, verbose=False):
'Deletes all FiftyOne datasets whose names match the given glob pattern.\n\n Args:\n glob_patt: a glob pattern of datasets to delete\n verbose (False): whether to log the names of deleted datasets\n '
all_datasets = list_datasets()
for name in fnmatch.filter(all_datasets, glob_patt):
delete_dataset(name, verbose=verbose) | Deletes all FiftyOne datasets whose names match the given glob pattern.
Args:
glob_patt: a glob pattern of datasets to delete
verbose (False): whether to log the names of deleted datasets | fiftyone/core/dataset.py | delete_datasets | dadounhind/fiftyone | 1 | python | def delete_datasets(glob_patt, verbose=False):
'Deletes all FiftyOne datasets whose names match the given glob pattern.\n\n Args:\n glob_patt: a glob pattern of datasets to delete\n verbose (False): whether to log the names of deleted datasets\n '
all_datasets = list_datasets()
for name in fnmatch.filter(all_datasets, glob_patt):
delete_dataset(name, verbose=verbose) | def delete_datasets(glob_patt, verbose=False):
'Deletes all FiftyOne datasets whose names match the given glob pattern.\n\n Args:\n glob_patt: a glob pattern of datasets to delete\n verbose (False): whether to log the names of deleted datasets\n '
all_datasets = list_datasets()
for name in fnmatch.filter(all_datasets, glob_patt):
delete_dataset(name, verbose=verbose)<|docstring|>Deletes all FiftyOne datasets whose names match the given glob pattern.
Args:
glob_patt: a glob pattern of datasets to delete
verbose (False): whether to log the names of deleted datasets<|endoftext|> |
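A sketch of a glob-based cleanup, assuming the helper is re-exported under the fo namespace like delete_dataset; the pattern is hypothetical:

import fiftyone as fo

# Remove every dataset whose name starts with "tmp-", logging each deletion
fo.delete_datasets("tmp-*", verbose=True)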
6a914f3d6a04e626510b8d401e4aa2323b108799b862f33576d09db88f354092 | def delete_non_persistent_datasets(verbose=False):
'Deletes all non-persistent datasets.\n\n Args:\n verbose (False): whether to log the names of deleted datasets\n '
for name in list_datasets():
dataset = Dataset(name, _create=False, _migrate=False)
if ((not dataset.persistent) and (not dataset.deleted)):
dataset.delete()
if verbose:
logger.info("Dataset '%s' deleted", name) | Deletes all non-persistent datasets.
Args:
verbose (False): whether to log the names of deleted datasets | fiftyone/core/dataset.py | delete_non_persistent_datasets | dadounhind/fiftyone | 1 | python | def delete_non_persistent_datasets(verbose=False):
'Deletes all non-persistent datasets.\n\n Args:\n verbose (False): whether to log the names of deleted datasets\n '
for name in list_datasets():
dataset = Dataset(name, _create=False, _migrate=False)
if ((not dataset.persistent) and (not dataset.deleted)):
dataset.delete()
if verbose:
logger.info("Dataset '%s' deleted", name) | def delete_non_persistent_datasets(verbose=False):
'Deletes all non-persistent datasets.\n\n Args:\n verbose (False): whether to log the names of deleted datasets\n '
for name in list_datasets():
dataset = Dataset(name, _create=False, _migrate=False)
if ((not dataset.persistent) and (not dataset.deleted)):
dataset.delete()
if verbose:
logger.info("Dataset '%s' deleted", name)<|docstring|>Deletes all non-persistent datasets.
Args:
verbose (False): whether to log the names of deleted datasets<|endoftext|> |
f89ca544dc4efdd0ab3d50ebb97c8f09a7c2024cc30d95ec679a58455c2b31b2 | @property
def media_type(self):
'The media type of the dataset.'
return self._doc.media_type | The media type of the dataset. | fiftyone/core/dataset.py | media_type | dadounhind/fiftyone | 1 | python | @property
def media_type(self):
return self._doc.media_type | @property
def media_type(self):
return self._doc.media_type<|docstring|>The media type of the dataset.<|endoftext|> |
2c2f99947378e8f02fdeea52d160f4dc635cc8f95fc75d0ca9e4d1b76c300616 | @property
def version(self):
'The version of the ``fiftyone`` package for which the dataset is\n formatted.\n '
return self._doc.version | The version of the ``fiftyone`` package for which the dataset is
formatted. | fiftyone/core/dataset.py | version | dadounhind/fiftyone | 1 | python | @property
def version(self):
'The version of the ``fiftyone`` package for which the dataset is\n formatted.\n '
return self._doc.version | @property
def version(self):
'The version of the ``fiftyone`` package for which the dataset is\n formatted.\n '
return self._doc.version<|docstring|>The version of the ``fiftyone`` package for which the dataset is
formatted.<|endoftext|> |
d35723a04774dd7cbe5d67ca35051eb14fefe0a933a87a1ab9e92a39bd6190ff | @property
def name(self):
'The name of the dataset.'
return self._doc.name | The name of the dataset. | fiftyone/core/dataset.py | name | dadounhind/fiftyone | 1 | python | @property
def name(self):
return self._doc.name | @property
def name(self):
return self._doc.name<|docstring|>The name of the dataset.<|endoftext|> |
cbdf4163e1bbca6ad54c7b68eb8201d812a904ac330a0f7942f4f745eecef78c | @property
def persistent(self):
'Whether the dataset persists in the database after a session is\n terminated.\n '
return self._doc.persistent | Whether the dataset persists in the database after a session is
terminated. | fiftyone/core/dataset.py | persistent | dadounhind/fiftyone | 1 | python | @property
def persistent(self):
'Whether the dataset persists in the database after a session is\n terminated.\n '
return self._doc.persistent | @property
def persistent(self):
'Whether the dataset persists in the database after a session is\n terminated.\n '
return self._doc.persistent<|docstring|>Whether the dataset persists in the database after a session is
terminated.<|endoftext|> |
b87c70beb7fdb0aa26c901e2d6b76029a531e8a4b7ff3692e3908cbaa721423a | @property
def info(self):
'A user-facing dictionary of information about the dataset.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Store a class list in the dataset\'s info\n dataset.info = {"classes": ["cat", "dog"]}\n\n # Edit the info\n dataset.info["other_classes"] = ["bird", "plane"]\n dataset.save() # must save after edits\n '
return self._doc.info | A user-facing dictionary of information about the dataset.
Examples::
import fiftyone as fo
dataset = fo.Dataset()
# Store a class list in the dataset's info
dataset.info = {"classes": ["cat", "dog"]}
# Edit the info
dataset.info["other_classes"] = ["bird", "plane"]
dataset.save() # must save after edits | fiftyone/core/dataset.py | info | dadounhind/fiftyone | 1 | python | @property
def info(self):
'A user-facing dictionary of information about the dataset.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Store a class list in the dataset\'s info\n dataset.info = {"classes": ["cat", "dog"]}\n\n # Edit the info\n dataset.info["other_classes"] = ["bird", "plane"]\n dataset.save() # must save after edits\n '
return self._doc.info | @property
def info(self):
'A user-facing dictionary of information about the dataset.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Store a class list in the dataset\'s info\n dataset.info = {"classes": ["cat", "dog"]}\n\n # Edit the info\n dataset.info["other_classes"] = ["bird", "plane"]\n dataset.save() # must save after edits\n '
return self._doc.info<|docstring|>A user-facing dictionary of information about the dataset.
Examples::
import fiftyone as fo
dataset = fo.Dataset()
# Store a class list in the dataset's info
dataset.info = {"classes": ["cat", "dog"]}
# Edit the info
dataset.info["other_classes"] = ["bird", "plane"]
dataset.save() # must save after edits<|endoftext|> |
1b904f90d47f7625257b0391de10b674570094231166f3bcd937afa6ba9c5f5b | @property
def classes(self):
'A dict mapping field names to list of class label strings for the\n corresponding fields of the dataset.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Set classes for the `ground_truth` and `predictions` fields\n dataset.classes = {\n "ground_truth": ["cat", "dog"],\n "predictions": ["cat", "dog", "other"],\n }\n\n # Edit an existing classes list\n dataset.classes["ground_truth"].append("other")\n dataset.save() # must save after edits\n '
return self._doc.classes | A dict mapping field names to list of class label strings for the
corresponding fields of the dataset.
Examples::
import fiftyone as fo
dataset = fo.Dataset()
# Set classes for the `ground_truth` and `predictions` fields
dataset.classes = {
"ground_truth": ["cat", "dog"],
"predictions": ["cat", "dog", "other"],
}
# Edit an existing classes list
dataset.classes["ground_truth"].append("other")
dataset.save() # must save after edits | fiftyone/core/dataset.py | classes | dadounhind/fiftyone | 1 | python | @property
def classes(self):
'A dict mapping field names to list of class label strings for the\n corresponding fields of the dataset.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Set classes for the `ground_truth` and `predictions` fields\n dataset.classes = {\n "ground_truth": ["cat", "dog"],\n "predictions": ["cat", "dog", "other"],\n }\n\n # Edit an existing classes list\n dataset.classes["ground_truth"].append("other")\n dataset.save() # must save after edits\n '
return self._doc.classes | @property
def classes(self):
'A dict mapping field names to list of class label strings for the\n corresponding fields of the dataset.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Set classes for the `ground_truth` and `predictions` fields\n dataset.classes = {\n "ground_truth": ["cat", "dog"],\n "predictions": ["cat", "dog", "other"],\n }\n\n # Edit an existing classes list\n dataset.classes["ground_truth"].append("other")\n dataset.save() # must save after edits\n '
return self._doc.classes<|docstring|>A dict mapping field names to list of class label strings for the
corresponding fields of the dataset.
Examples::
import fiftyone as fo
dataset = fo.Dataset()
# Set classes for the `ground_truth` and `predictions` fields
dataset.classes = {
"ground_truth": ["cat", "dog"],
"predictions": ["cat", "dog", "other"],
}
# Edit an existing classes list
dataset.classes["ground_truth"].append("other")
dataset.save() # must save after edits<|endoftext|> |
4d034e5211a02033b49059e98f45a4d8e18ef6c55773af734d369cd9a2b0c26d | @property
def default_classes(self):
'A list of class label strings for all\n :class:`fiftyone.core.labels.Label` fields of this dataset that do not\n have customized classes defined in :meth:`classes`.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Set default classes\n dataset.default_classes = ["cat", "dog"]\n\n # Edit the default classes\n dataset.default_classes.append("rabbit")\n dataset.save() # must save after edits\n '
return self._doc.default_classes | A list of class label strings for all
:class:`fiftyone.core.labels.Label` fields of this dataset that do not
have customized classes defined in :meth:`classes`.
Examples::
import fiftyone as fo
dataset = fo.Dataset()
# Set default classes
dataset.default_classes = ["cat", "dog"]
# Edit the default classes
dataset.default_classes.append("rabbit")
dataset.save() # must save after edits | fiftyone/core/dataset.py | default_classes | dadounhind/fiftyone | 1 | python | @property
def default_classes(self):
'A list of class label strings for all\n :class:`fiftyone.core.labels.Label` fields of this dataset that do not\n have customized classes defined in :meth:`classes`.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Set default classes\n dataset.default_classes = ["cat", "dog"]\n\n # Edit the default classes\n dataset.default_classes.append("rabbit")\n dataset.save() # must save after edits\n '
return self._doc.default_classes | @property
def default_classes(self):
'A list of class label strings for all\n :class:`fiftyone.core.labels.Label` fields of this dataset that do not\n have customized classes defined in :meth:`classes`.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Set default classes\n dataset.default_classes = ["cat", "dog"]\n\n # Edit the default classes\n dataset.default_classes.append("rabbit")\n dataset.save() # must save after edits\n '
return self._doc.default_classes<|docstring|>A list of class label strings for all
:class:`fiftyone.core.labels.Label` fields of this dataset that do not
have customized classes defined in :meth:`classes`.
Examples::
import fiftyone as fo
dataset = fo.Dataset()
# Set default classes
dataset.default_classes = ["cat", "dog"]
# Edit the default classes
dataset.default_classes.append("rabbit")
dataset.save() # must save after edits<|endoftext|> |
bc13a9db0f1786e5038234d310319f18c59fa541440c8d7b97e08464e88da04c | @property
def mask_targets(self):
'A dict mapping field names to mask target dicts, each of which\n defines a mapping between pixel values and label strings for the\n segmentation masks in the corresponding field of the dataset.\n\n .. note::\n\n The pixel value `0` is a reserved "background" class that is\n rendered as invislble in the App.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Set mask targets for the `ground_truth` and `predictions` fields\n dataset.mask_targets = {\n "ground_truth": {1: "cat", 2: "dog"},\n "predictions": {1: "cat", 2: "dog", 255: "other"},\n }\n\n # Edit an existing mask target\n dataset.mask_targets["ground_truth"][255] = "other"\n dataset.save() # must save after edits\n '
return self._doc.mask_targets | A dict mapping field names to mask target dicts, each of which
defines a mapping between pixel values and label strings for the
segmentation masks in the corresponding field of the dataset.
.. note::
The pixel value `0` is a reserved "background" class that is
rendered as invisible in the App.
Examples::
import fiftyone as fo
dataset = fo.Dataset()
# Set mask targets for the `ground_truth` and `predictions` fields
dataset.mask_targets = {
"ground_truth": {1: "cat", 2: "dog"},
"predictions": {1: "cat", 2: "dog", 255: "other"},
}
# Edit an existing mask target
dataset.mask_targets["ground_truth"][255] = "other"
dataset.save() # must save after edits | fiftyone/core/dataset.py | mask_targets | dadounhind/fiftyone | 1 | python | @property
def mask_targets(self):
'A dict mapping field names to mask target dicts, each of which\n defines a mapping between pixel values and label strings for the\n segmentation masks in the corresponding field of the dataset.\n\n .. note::\n\n The pixel value `0` is a reserved "background" class that is\n rendered as invislble in the App.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Set mask targets for the `ground_truth` and `predictions` fields\n dataset.mask_targets = {\n "ground_truth": {1: "cat", 2: "dog"},\n "predictions": {1: "cat", 2: "dog", 255: "other"},\n }\n\n # Edit an existing mask target\n dataset.mask_targets["ground_truth"][255] = "other"\n dataset.save() # must save after edits\n '
return self._doc.mask_targets | @property
def mask_targets(self):
'A dict mapping field names to mask target dicts, each of which\n defines a mapping between pixel values and label strings for the\n segmentation masks in the corresponding field of the dataset.\n\n .. note::\n\n The pixel value `0` is a reserved "background" class that is\n rendered as invislble in the App.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Set mask targets for the `ground_truth` and `predictions` fields\n dataset.mask_targets = {\n "ground_truth": {1: "cat", 2: "dog"},\n "predictions": {1: "cat", 2: "dog", 255: "other"},\n }\n\n # Edit an existing mask target\n dataset.mask_targets["ground_truth"][255] = "other"\n dataset.save() # must save after edits\n '
return self._doc.mask_targets<|docstring|>A dict mapping field names to mask target dicts, each of which
defines a mapping between pixel values and label strings for the
segmentation masks in the corresponding field of the dataset.
.. note::
The pixel value `0` is a reserved "background" class that is
rendered as invisible in the App.
Examples::
import fiftyone as fo
dataset = fo.Dataset()
# Set mask targets for the `ground_truth` and `predictions` fields
dataset.mask_targets = {
"ground_truth": {1: "cat", 2: "dog"},
"predictions": {1: "cat", 2: "dog", 255: "other"},
}
# Edit an existing mask target
dataset.mask_targets["ground_truth"][255] = "other"
dataset.save() # must save after edits<|endoftext|> |
244df09932a069c585a33f5c0a92438cc84dd8a5a3ff0c566a14f108b8d7d4bd | @property
def default_mask_targets(self):
'A dict defining a default mapping between pixel values and label\n strings for the segmentation masks of all\n :class:`fiftyone.core.labels.Segmentation` fields of this dataset that\n do not have customized mask targets defined in :meth:`mask_targets`.\n\n .. note::\n\n The pixel value `0` is a reserved "background" class that is\n rendered as invislble in the App.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Set default mask targets\n dataset.default_mask_targets = {1: "cat", 2: "dog"}\n\n # Edit the default mask targets\n dataset.default_mask_targets[255] = "other"\n dataset.save() # must save after edits\n '
return self._doc.default_mask_targets | A dict defining a default mapping between pixel values and label
strings for the segmentation masks of all
:class:`fiftyone.core.labels.Segmentation` fields of this dataset that
do not have customized mask targets defined in :meth:`mask_targets`.
.. note::
The pixel value `0` is a reserved "background" class that is
rendered as invisible in the App.
Examples::
import fiftyone as fo
dataset = fo.Dataset()
# Set default mask targets
dataset.default_mask_targets = {1: "cat", 2: "dog"}
# Edit the default mask targets
dataset.default_mask_targets[255] = "other"
dataset.save() # must save after edits | fiftyone/core/dataset.py | default_mask_targets | dadounhind/fiftyone | 1 | python | @property
def default_mask_targets(self):
'A dict defining a default mapping between pixel values and label\n strings for the segmentation masks of all\n :class:`fiftyone.core.labels.Segmentation` fields of this dataset that\n do not have customized mask targets defined in :meth:`mask_targets`.\n\n .. note::\n\n The pixel value `0` is a reserved "background" class that is\n rendered as invislble in the App.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Set default mask targets\n dataset.default_mask_targets = {1: "cat", 2: "dog"}\n\n # Edit the default mask targets\n dataset.default_mask_targets[255] = "other"\n dataset.save() # must save after edits\n '
return self._doc.default_mask_targets | @property
def default_mask_targets(self):
'A dict defining a default mapping between pixel values and label\n strings for the segmentation masks of all\n :class:`fiftyone.core.labels.Segmentation` fields of this dataset that\n do not have customized mask targets defined in :meth:`mask_targets`.\n\n .. note::\n\n The pixel value `0` is a reserved "background" class that is\n rendered as invislble in the App.\n\n Examples::\n\n import fiftyone as fo\n\n dataset = fo.Dataset()\n\n # Set default mask targets\n dataset.default_mask_targets = {1: "cat", 2: "dog"}\n\n # Edit the default mask targets\n dataset.default_mask_targets[255] = "other"\n dataset.save() # must save after edits\n '
return self._doc.default_mask_targets<|docstring|>A dict defining a default mapping between pixel values and label
strings for the segmentation masks of all
:class:`fiftyone.core.labels.Segmentation` fields of this dataset that
do not have customized mask targets defined in :meth:`mask_targets`.
.. note::
The pixel value `0` is a reserved "background" class that is
rendered as invisible in the App.
Examples::
import fiftyone as fo
dataset = fo.Dataset()
# Set default mask targets
dataset.default_mask_targets = {1: "cat", 2: "dog"}
# Edit the default mask targets
dataset.default_mask_targets[255] = "other"
dataset.save() # must save after edits<|endoftext|> |
2ac494052d27779b0b36fe704b44c7b095150fa060a55f831f5851e3adbd48c1 | @property
def deleted(self):
'Whether the dataset is deleted.'
return self._deleted | Whether the dataset is deleted. | fiftyone/core/dataset.py | deleted | dadounhind/fiftyone | 1 | python | @property
def deleted(self):
return self._deleted | @property
def deleted(self):
return self._deleted<|docstring|>Whether the dataset is deleted.<|endoftext|> |
fa114f7f67c92eee30e98e2c09c0c1eab5b4b8439b11a8b53fabf9e8d015767b | def summary(self):
'Returns a string summary of the dataset.\n\n Returns:\n a string summary\n '
aggs = self.aggregate([foa.Count(), foa.Distinct('tags')], _attach_frames=False)
elements = [('Name: %s' % self.name), ('Media type: %s' % self.media_type), ('Num samples: %d' % aggs[0]), ('Persistent: %s' % self.persistent), ('Tags: %s' % aggs[1]), 'Sample fields:', self._to_fields_str(self.get_field_schema())]
if (self.media_type == fom.VIDEO):
elements.extend(['Frame fields:', self._to_fields_str(self.get_frame_field_schema())])
return '\n'.join(elements) | Returns a string summary of the dataset.
Returns:
a string summary | fiftyone/core/dataset.py | summary | dadounhind/fiftyone | 1 | python | def summary(self):
'Returns a string summary of the dataset.\n\n Returns:\n a string summary\n '
aggs = self.aggregate([foa.Count(), foa.Distinct('tags')], _attach_frames=False)
elements = [('Name: %s' % self.name), ('Media type: %s' % self.media_type), ('Num samples: %d' % aggs[0]), ('Persistent: %s' % self.persistent), ('Tags: %s' % aggs[1]), 'Sample fields:', self._to_fields_str(self.get_field_schema())]
if (self.media_type == fom.VIDEO):
elements.extend(['Frame fields:', self._to_fields_str(self.get_frame_field_schema())])
return '\n'.join(elements) | def summary(self):
'Returns a string summary of the dataset.\n\n Returns:\n a string summary\n '
aggs = self.aggregate([foa.Count(), foa.Distinct('tags')], _attach_frames=False)
elements = [('Name: %s' % self.name), ('Media type: %s' % self.media_type), ('Num samples: %d' % aggs[0]), ('Persistent: %s' % self.persistent), ('Tags: %s' % aggs[1]), 'Sample fields:', self._to_fields_str(self.get_field_schema())]
if (self.media_type == fom.VIDEO):
elements.extend(['Frame fields:', self._to_fields_str(self.get_frame_field_schema())])
return '\n'.join(elements)<|docstring|>Returns a string summary of the dataset.
Returns:
a string summary<|endoftext|> |
ba40b72b5b5d19259a9798d4f42faea20e363988b3e8b268480ebd60431a93b6 | def stats(self, include_media=False, compressed=False):
'Returns stats about the dataset on disk.\n\n The ``samples`` keys refer to the sample-level labels for the dataset\n as they are stored in the database.\n\n The ``media`` keys refer to the raw media associated with each sample\n in the dataset on disk (only included if ``include_media`` is True).\n\n The ``frames`` keys refer to the frame labels for the dataset as they\n are stored in the database (video datasets only).\n\n Args:\n include_media (False): whether to include stats about the size of\n the raw media in the dataset\n compressed (False): whether to return the sizes of collections in\n their compressed form on disk (True) or the logical\n uncompressed size of the collections (False)\n\n Returns:\n a stats dict\n '
stats = {}
conn = foo.get_db_conn()
cs = conn.command('collstats', self._sample_collection_name)
samples_bytes = (cs['storageSize'] if compressed else cs['size'])
stats['samples_count'] = cs['count']
stats['samples_bytes'] = samples_bytes
stats['samples_size'] = etau.to_human_bytes_str(samples_bytes)
total_bytes = samples_bytes
if (self.media_type == fom.VIDEO):
cs = conn.command('collstats', self._frame_collection_name)
frames_bytes = (cs['storageSize'] if compressed else cs['size'])
stats['frames_count'] = cs['count']
stats['frames_bytes'] = frames_bytes
stats['frames_size'] = etau.to_human_bytes_str(frames_bytes)
total_bytes += frames_bytes
if include_media:
self.compute_metadata()
media_bytes = self.sum('metadata.size_bytes')
stats['media_bytes'] = media_bytes
stats['media_size'] = etau.to_human_bytes_str(media_bytes)
total_bytes += media_bytes
stats['total_bytes'] = total_bytes
stats['total_size'] = etau.to_human_bytes_str(total_bytes)
return stats | Returns stats about the dataset on disk.
The ``samples`` keys refer to the sample-level labels for the dataset
as they are stored in the database.
The ``media`` keys refer to the raw media associated with each sample
in the dataset on disk (only included if ``include_media`` is True).
The ``frames`` keys refer to the frame labels for the dataset as they
are stored in the database (video datasets only).
Args:
include_media (False): whether to include stats about the size of
the raw media in the dataset
compressed (False): whether to return the sizes of collections in
their compressed form on disk (True) or the logical
uncompressed size of the collections (False)
Returns:
a stats dict | fiftyone/core/dataset.py | stats | dadounhind/fiftyone | 1 | python | def stats(self, include_media=False, compressed=False):
'Returns stats about the dataset on disk.\n\n The ``samples`` keys refer to the sample-level labels for the dataset\n as they are stored in the database.\n\n The ``media`` keys refer to the raw media associated with each sample\n in the dataset on disk (only included if ``include_media`` is True).\n\n The ``frames`` keys refer to the frame labels for the dataset as they\n are stored in the database (video datasets only).\n\n Args:\n include_media (False): whether to include stats about the size of\n the raw media in the dataset\n compressed (False): whether to return the sizes of collections in\n their compressed form on disk (True) or the logical\n uncompressed size of the collections (False)\n\n Returns:\n a stats dict\n '
stats = {}
conn = foo.get_db_conn()
cs = conn.command('collstats', self._sample_collection_name)
samples_bytes = (cs['storageSize'] if compressed else cs['size'])
stats['samples_count'] = cs['count']
stats['samples_bytes'] = samples_bytes
stats['samples_size'] = etau.to_human_bytes_str(samples_bytes)
total_bytes = samples_bytes
if (self.media_type == fom.VIDEO):
cs = conn.command('collstats', self._frame_collection_name)
frames_bytes = (cs['storageSize'] if compressed else cs['size'])
stats['frames_count'] = cs['count']
stats['frames_bytes'] = frames_bytes
stats['frames_size'] = etau.to_human_bytes_str(frames_bytes)
total_bytes += frames_bytes
if include_media:
self.compute_metadata()
media_bytes = self.sum('metadata.size_bytes')
stats['media_bytes'] = media_bytes
stats['media_size'] = etau.to_human_bytes_str(media_bytes)
total_bytes += media_bytes
stats['total_bytes'] = total_bytes
stats['total_size'] = etau.to_human_bytes_str(total_bytes)
return stats | def stats(self, include_media=False, compressed=False):
'Returns stats about the dataset on disk.\n\n The ``samples`` keys refer to the sample-level labels for the dataset\n as they are stored in the database.\n\n The ``media`` keys refer to the raw media associated with each sample\n in the dataset on disk (only included if ``include_media`` is True).\n\n The ``frames`` keys refer to the frame labels for the dataset as they\n are stored in the database (video datasets only).\n\n Args:\n include_media (False): whether to include stats about the size of\n the raw media in the dataset\n compressed (False): whether to return the sizes of collections in\n their compressed form on disk (True) or the logical\n uncompressed size of the collections (False)\n\n Returns:\n a stats dict\n '
stats = {}
conn = foo.get_db_conn()
cs = conn.command('collstats', self._sample_collection_name)
samples_bytes = (cs['storageSize'] if compressed else cs['size'])
stats['samples_count'] = cs['count']
stats['samples_bytes'] = samples_bytes
stats['samples_size'] = etau.to_human_bytes_str(samples_bytes)
total_bytes = samples_bytes
if (self.media_type == fom.VIDEO):
cs = conn.command('collstats', self._frame_collection_name)
frames_bytes = (cs['storageSize'] if compressed else cs['size'])
stats['frames_count'] = cs['count']
stats['frames_bytes'] = frames_bytes
stats['frames_size'] = etau.to_human_bytes_str(frames_bytes)
total_bytes += frames_bytes
if include_media:
self.compute_metadata()
media_bytes = self.sum('metadata.size_bytes')
stats['media_bytes'] = media_bytes
stats['media_size'] = etau.to_human_bytes_str(media_bytes)
total_bytes += media_bytes
stats['total_bytes'] = total_bytes
stats['total_size'] = etau.to_human_bytes_str(total_bytes)
return stats<|docstring|>Returns stats about the dataset on disk.
The ``samples`` keys refer to the sample-level labels for the dataset
as they are stored in the database.
The ``media`` keys refer to the raw media associated with each sample
in the dataset on disk (only included if ``include_media`` is True).
The ``frames`` keys refer to the frame labels for the dataset as they
are stored in the database (video datasets only).
Args:
include_media (False): whether to include stats about the size of
the raw media in the dataset
compressed (False): whether to return the sizes of collections in
their compressed form on disk (True) or the logical
uncompressed size of the collections (False)
Returns:
a stats dict<|endoftext|> |
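A minimal usage sketch for stats(); the dataset name "quickstart" is illustrative and assumes such a dataset already exists locally:

import fiftyone as fo

dataset = fo.load_dataset("quickstart")  # hypothetical existing dataset

# Logical (uncompressed) collection sizes only
print(dataset.stats())

# Include raw media sizes and report compressed on-disk collection sizes;
# including media triggers compute_metadata() on the dataset
stats = dataset.stats(include_media=True, compressed=True)
print(stats["total_size"])  # human-readable string, e.g. "1.5GB"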
2dbedfb4f3f43d7457c66814316474f6d6118bba09abcdbed6f4a635b5779091 | def first(self):
'Returns the first sample in the dataset.\n\n Returns:\n a :class:`fiftyone.core.sample.Sample`\n\n Raises:\n ValueError: if the dataset is empty\n '
return super().first() | Returns the first sample in the dataset.
Returns:
a :class:`fiftyone.core.sample.Sample`
Raises:
ValueError: if the dataset is empty | fiftyone/core/dataset.py | first | dadounhind/fiftyone | 1 | python | def first(self):
'Returns the first sample in the dataset.\n\n Returns:\n a :class:`fiftyone.core.sample.Sample`\n\n Raises:\n ValueError: if the dataset is empty\n '
return super().first() | def first(self):
'Returns the first sample in the dataset.\n\n Returns:\n a :class:`fiftyone.core.sample.Sample`\n\n Raises:\n ValueError: if the dataset is empty\n '
return super().first()<|docstring|>Returns the first sample in the dataset.
Returns:
a :class:`fiftyone.core.sample.Sample`
Raises:
ValueError: if the dataset is empty<|endoftext|> |
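A short sketch of first(), assuming dataset is a non-empty FiftyOne dataset object:

sample = dataset.first()  # raises ValueError if the dataset is empty
print(sample.id, sample.filepath)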
b8026052a33b82d2cca9dd6e65b185467e41b84b67978bbf8aab2b10b8a591f2 | def last(self):
'Returns the last sample in the dataset.\n\n Returns:\n a :class:`fiftyone.core.sample.Sample`\n\n Raises:\n ValueError: if the dataset is empty\n '
try:
sample_view = self[(- 1):].first()
except ValueError:
raise ValueError(('%s is empty' % self.__class__.__name__))
return fos.Sample.from_doc(sample_view._doc, dataset=self) | Returns the last sample in the dataset.
Returns:
a :class:`fiftyone.core.sample.Sample`
Raises:
ValueError: if the dataset is empty | fiftyone/core/dataset.py | last | dadounhind/fiftyone | 1 | python | def last(self):
'Returns the last sample in the dataset.\n\n Returns:\n a :class:`fiftyone.core.sample.Sample`\n\n Raises:\n ValueError: if the dataset is empty\n '
try:
sample_view = self[(- 1):].first()
except ValueError:
raise ValueError(('%s is empty' % self.__class__.__name__))
return fos.Sample.from_doc(sample_view._doc, dataset=self) | def last(self):
'Returns the last sample in the dataset.\n\n Returns:\n a :class:`fiftyone.core.sample.Sample`\n\n Raises:\n ValueError: if the dataset is empty\n '
try:
sample_view = self[(- 1):].first()
except ValueError:
raise ValueError(('%s is empty' % self.__class__.__name__))
return fos.Sample.from_doc(sample_view._doc, dataset=self)<|docstring|>Returns the last sample in the dataset.
Returns:
a :class:`fiftyone.core.sample.Sample`
Raises:
ValueError: if the dataset is empty<|endoftext|> |
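last() is the symmetric accessor; a sketch that guards against the empty-dataset case:

try:
    sample = dataset.last()
except ValueError:
    sample = None  # dataset is empty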
7717ae32f942232bf491bf72850bef31634b8e8dfc60947acb6a20f722d16560 | def head(self, num_samples=3):
'Returns a list of the first few samples in the dataset.\n\n If fewer than ``num_samples`` samples are in the dataset, only the\n available samples are returned.\n\n Args:\n num_samples (3): the number of samples\n\n Returns:\n a list of :class:`fiftyone.core.sample.Sample` objects\n '
return [fos.Sample.from_doc(sv._doc, dataset=self) for sv in self[:num_samples]] | Returns a list of the first few samples in the dataset.
If fewer than ``num_samples`` samples are in the dataset, only the
available samples are returned.
Args:
num_samples (3): the number of samples
Returns:
a list of :class:`fiftyone.core.sample.Sample` objects | fiftyone/core/dataset.py | head | dadounhind/fiftyone | 1 | python | def head(self, num_samples=3):
'Returns a list of the first few samples in the dataset.\n\n If fewer than ``num_samples`` samples are in the dataset, only the\n available samples are returned.\n\n Args:\n num_samples (3): the number of samples\n\n Returns:\n a list of :class:`fiftyone.core.sample.Sample` objects\n '
return [fos.Sample.from_doc(sv._doc, dataset=self) for sv in self[:num_samples]] | def head(self, num_samples=3):
'Returns a list of the first few samples in the dataset.\n\n If fewer than ``num_samples`` samples are in the dataset, only the\n available samples are returned.\n\n Args:\n num_samples (3): the number of samples\n\n Returns:\n a list of :class:`fiftyone.core.sample.Sample` objects\n '
return [fos.Sample.from_doc(sv._doc, dataset=self) for sv in self[:num_samples]]<|docstring|>Returns a list of the first few samples in the dataset.
If fewer than ``num_samples`` samples are in the dataset, only the
available samples are returned.
Args:
num_samples (3): the number of samples
Returns:
a list of :class:`fiftyone.core.sample.Sample` objects<|endoftext|> |
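Sketch of head(), again assuming an existing dataset object; requesting more samples than exist simply returns the available ones:

for sample in dataset.head(num_samples=5):
    print(sample.filepath)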
8a635ba8c288c95a76806a46ef802c81c033062e6a93b159839bd5d94b216820 | def tail(self, num_samples=3):
'Returns a list of the last few samples in the dataset.\n\n If fewer than ``num_samples`` samples are in the dataset, only the\n available samples are returned.\n\n Args:\n num_samples (3): the number of samples\n\n Returns:\n a list of :class:`fiftyone.core.sample.Sample` objects\n '
return [fos.Sample.from_doc(sv._doc, dataset=self) for sv in self[(- num_samples):]] | Returns a list of the last few samples in the dataset.
If fewer than ``num_samples`` samples are in the dataset, only the
available samples are returned.
Args:
num_samples (3): the number of samples
Returns:
a list of :class:`fiftyone.core.sample.Sample` objects | fiftyone/core/dataset.py | tail | dadounhind/fiftyone | 1 | python | def tail(self, num_samples=3):
'Returns a list of the last few samples in the dataset.\n\n If fewer than ``num_samples`` samples are in the dataset, only the\n available samples are returned.\n\n Args:\n num_samples (3): the number of samples\n\n Returns:\n a list of :class:`fiftyone.core.sample.Sample` objects\n '
return [fos.Sample.from_doc(sv._doc, dataset=self) for sv in self[(- num_samples):]] | def tail(self, num_samples=3):
'Returns a list of the last few samples in the dataset.\n\n If fewer than ``num_samples`` samples are in the dataset, only the\n available samples are returned.\n\n Args:\n num_samples (3): the number of samples\n\n Returns:\n a list of :class:`fiftyone.core.sample.Sample` objects\n '
return [fos.Sample.from_doc(sv._doc, dataset=self) for sv in self[(- num_samples):]]<|docstring|>Returns a list of the last few samples in the dataset.
If fewer than ``num_samples`` samples are in the dataset, only the
available samples are returned.
Args:
num_samples (3): the number of samples
Returns:
a list of :class:`fiftyone.core.sample.Sample` objects<|endoftext|> |
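The matching tail() call, under the same assumption:

last_three = dataset.tail()  # default num_samples=3
print(len(last_three))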
2f21f3ba55d60f16fb9d06c071f033eb9c1bac556ac33c2e32af25b92907666a | def view(self):
'Returns a :class:`fiftyone.core.view.DatasetView` containing the\n entire dataset.\n\n Returns:\n a :class:`fiftyone.core.view.DatasetView`\n '
return fov.DatasetView(self) | Returns a :class:`fiftyone.core.view.DatasetView` containing the
entire dataset.
Returns:
a :class:`fiftyone.core.view.DatasetView` | fiftyone/core/dataset.py | view | dadounhind/fiftyone | 1 | python | def view(self):
'Returns a :class:`fiftyone.core.view.DatasetView` containing the\n entire dataset.\n\n Returns:\n a :class:`fiftyone.core.view.DatasetView`\n '
return fov.DatasetView(self) | def view(self):
'Returns a :class:`fiftyone.core.view.DatasetView` containing the\n entire dataset.\n\n Returns:\n a :class:`fiftyone.core.view.DatasetView`\n '
return fov.DatasetView(self)<|docstring|>Returns a :class:`fiftyone.core.view.DatasetView` containing the
entire dataset.
Returns:
a :class:`fiftyone.core.view.DatasetView`<|endoftext|> |
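A sketch showing the full-dataset view as the starting point for chained view stages; the "uniqueness" field in the comment is hypothetical:

view = dataset.view()
print(view.count())  # same number of samples as the dataset

# View stages are then chained off this object, e.g. (hypothetical field):
# best = view.sort_by("uniqueness", reverse=True).limit(10)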
3dcb019c429c8f63e3dc3c61766c3d8cba3c46a804a89df08bee1cd273c33116 | @classmethod
def get_default_sample_fields(cls, include_private=False):
'Gets the default fields present on all :class:`Dataset` instances.\n\n Args:\n include_private (False): whether or not to return fields prefixed\n with a `_`\n\n Returns:\n a tuple of field names\n '
return fos.get_default_sample_fields(include_private=include_private) | Gets the default fields present on all :class:`Dataset` instances.
Args:
include_private (False): whether or not to return fields prefixed
with a `_`
Returns:
a tuple of field names | fiftyone/core/dataset.py | get_default_sample_fields | dadounhind/fiftyone | 1 | python | @classmethod
def get_default_sample_fields(cls, include_private=False):
'Gets the default fields present on all :class:`Dataset` instances.\n\n Args:\n include_private (False): whether or not to return fields prefixed\n with a `_`\n\n Returns:\n a tuple of field names\n '
return fos.get_default_sample_fields(include_private=include_private) | @classmethod
def get_default_sample_fields(cls, include_private=False):
'Gets the default fields present on all :class:`Dataset` instances.\n\n Args:\n include_private (False): whether or not to return fields prefixed\n with a `_`\n\n Returns:\n a tuple of field names\n '
return fos.get_default_sample_fields(include_private=include_private)<|docstring|>Gets the default fields present on all :class:`Dataset` instances.
Args:
include_private (False): whether or not to return fields prefixed
with a `_`
Returns:
a tuple of field names<|endoftext|> |
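Because this is a classmethod, it can be queried without instantiating a dataset; the exact tuple contents vary by FiftyOne version:

import fiftyone as fo

print(fo.Dataset.get_default_sample_fields())
print(fo.Dataset.get_default_sample_fields(include_private=True))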
b615926be9227fb172c64f3bf9db0df1fb67e026a224e6bf2a380c301333fdcf | @classmethod
def get_default_frame_fields(cls, include_private=False):
'Gets the default fields present on all\n :class:`fiftyone.core.frame.Frame` instances.\n\n Args:\n include_private (False): whether or not to return fields prefixed\n with a `_`\n\n Returns:\n a tuple of field names\n '
return fofr.get_default_frame_fields(include_private=include_private) | Gets the default fields present on all
:class:`fiftyone.core.frame.Frame` instances.
Args:
include_private (False): whether or not to return fields prefixed
with a `_`
Returns:
a tuple of field names | fiftyone/core/dataset.py | get_default_frame_fields | dadounhind/fiftyone | 1 | python | @classmethod
def get_default_frame_fields(cls, include_private=False):
'Gets the default fields present on all\n :class:`fiftyone.core.frame.Frame` instances.\n\n Args:\n include_private (False): whether or not to return fields prefixed\n with a `_`\n\n Returns:\n a tuple of field names\n '
return fofr.get_default_frame_fields(include_private=include_private) | @classmethod
def get_default_frame_fields(cls, include_private=False):
'Gets the default fields present on all\n :class:`fiftyone.core.frame.Frame` instances.\n\n Args:\n include_private (False): whether or not to return fields prefixed\n with a `_`\n\n Returns:\n a tuple of field names\n '
return fofr.get_default_frame_fields(include_private=include_private)<|docstring|>Gets the default fields present on all
:class:`fiftyone.core.frame.Frame` instances.
Args:
include_private (False): whether or not to return fields prefixed
with a `_`
Returns:
a tuple of field names<|endoftext|> |
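The frame-level counterpart works the same way; the comment about its contents is indicative only:

import fiftyone as fo

print(fo.Dataset.get_default_frame_fields())  # typically includes 'frame_number'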
92f83f6efabde0b4a5557dbae3f30b87d0c6cabc978a8815c928b3eba5f038c8 | def get_field_schema(self, ftype=None, embedded_doc_type=None, include_private=False):
'Returns a schema dictionary describing the fields of the samples in\n the dataset.\n\n Args:\n ftype (None): an optional field type to which to restrict the\n returned schema. Must be a subclass of\n :class:`fiftyone.core.fields.Field`\n embedded_doc_type (None): an optional embedded document type to\n which to restrict the returned schema. Must be a subclass of\n :class:`fiftyone.core.odm.BaseEmbeddedDocument`\n include_private (False): whether to include fields that start with\n `_` in the returned schema\n\n Returns:\n an ``OrderedDict`` mapping field names to field types\n '
d = self._sample_doc_cls.get_field_schema(ftype=ftype, embedded_doc_type=embedded_doc_type, include_private=include_private)
if ((not include_private) and (self.media_type == fom.VIDEO)):
d.pop('frames', None)
return d | Returns a schema dictionary describing the fields of the samples in
the dataset.
Args:
ftype (None): an optional field type to which to restrict the
returned schema. Must be a subclass of
:class:`fiftyone.core.fields.Field`
embedded_doc_type (None): an optional embedded document type to
which to restrict the returned schema. Must be a subclass of
:class:`fiftyone.core.odm.BaseEmbeddedDocument`
include_private (False): whether to include fields that start with
`_` in the returned schema
Returns:
an ``OrderedDict`` mapping field names to field types | fiftyone/core/dataset.py | get_field_schema | dadounhind/fiftyone | 1 | python | def get_field_schema(self, ftype=None, embedded_doc_type=None, include_private=False):
'Returns a schema dictionary describing the fields of the samples in\n the dataset.\n\n Args:\n ftype (None): an optional field type to which to restrict the\n returned schema. Must be a subclass of\n :class:`fiftyone.core.fields.Field`\n embedded_doc_type (None): an optional embedded document type to\n which to restrict the returned schema. Must be a subclass of\n :class:`fiftyone.core.odm.BaseEmbeddedDocument`\n include_private (False): whether to include fields that start with\n `_` in the returned schema\n\n Returns:\n an ``OrderedDict`` mapping field names to field types\n '
d = self._sample_doc_cls.get_field_schema(ftype=ftype, embedded_doc_type=embedded_doc_type, include_private=include_private)
if ((not include_private) and (self.media_type == fom.VIDEO)):
d.pop('frames', None)
return d | def get_field_schema(self, ftype=None, embedded_doc_type=None, include_private=False):
'Returns a schema dictionary describing the fields of the samples in\n the dataset.\n\n Args:\n ftype (None): an optional field type to which to restrict the\n returned schema. Must be a subclass of\n :class:`fiftyone.core.fields.Field`\n embedded_doc_type (None): an optional embedded document type to\n which to restrict the returned schema. Must be a subclass of\n :class:`fiftyone.core.odm.BaseEmbeddedDocument`\n include_private (False): whether to include fields that start with\n `_` in the returned schema\n\n Returns:\n an ``OrderedDict`` mapping field names to field types\n '
d = self._sample_doc_cls.get_field_schema(ftype=ftype, embedded_doc_type=embedded_doc_type, include_private=include_private)
if ((not include_private) and (self.media_type == fom.VIDEO)):
d.pop('frames', None)
return d<|docstring|>Returns a schema dictionary describing the fields of the samples in
the dataset.
Args:
ftype (None): an optional field type to which to restrict the
returned schema. Must be a subclass of
:class:`fiftyone.core.fields.Field`
embedded_doc_type (None): an optional embedded document type to
which to restrict the returned schema. Must be a subclass of
:class:`fiftyone.core.odm.BaseEmbeddedDocument`
include_private (False): whether to include fields that start with
`_` in the returned schema
Returns:
an ``OrderedDict`` mapping field names to field types<|endoftext|> |
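A sketch that prints the schema and then restricts it to embedded-document (label-style) fields; assumes dataset is an existing FiftyOne dataset:

import fiftyone.core.fields as fof

schema = dataset.get_field_schema()
for name, field in schema.items():
    print(name, type(field).__name__)

# Only fields whose type is an EmbeddedDocumentField
label_fields = dataset.get_field_schema(ftype=fof.EmbeddedDocumentField)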
f5eefed0f1792095b44444b185a5052dcc51b737ce7723119c8572bb0c0219f4 | def get_frame_field_schema(self, ftype=None, embedded_doc_type=None, include_private=False):
'Returns a schema dictionary describing the fields of the frames of\n the samples in the dataset.\n\n Only applicable for video datasets.\n\n Args:\n ftype (None): an optional field type to which to restrict the\n returned schema. Must be a subclass of\n :class:`fiftyone.core.fields.Field`\n embedded_doc_type (None): an optional embedded document type to\n which to restrict the returned schema. Must be a subclass of\n :class:`fiftyone.core.odm.BaseEmbeddedDocument`\n include_private (False): whether to include fields that start with\n `_` in the returned schema\n\n Returns:\n a dictionary mapping field names to field types, or ``None`` if\n the dataset is not a video dataset\n '
if (self.media_type != fom.VIDEO):
return None
return self._frame_doc_cls.get_field_schema(ftype=ftype, embedded_doc_type=embedded_doc_type, include_private=include_private) | Returns a schema dictionary describing the fields of the frames of
the samples in the dataset.
Only applicable for video datasets.
Args:
ftype (None): an optional field type to which to restrict the
returned schema. Must be a subclass of
:class:`fiftyone.core.fields.Field`
embedded_doc_type (None): an optional embedded document type to
which to restrict the returned schema. Must be a subclass of
:class:`fiftyone.core.odm.BaseEmbeddedDocument`
include_private (False): whether to include fields that start with
`_` in the returned schema
Returns:
a dictionary mapping field names to field types, or ``None`` if
the dataset is not a video dataset | fiftyone/core/dataset.py | get_frame_field_schema | dadounhind/fiftyone | 1 | python | def get_frame_field_schema(self, ftype=None, embedded_doc_type=None, include_private=False):
'Returns a schema dictionary describing the fields of the frames of\n the samples in the dataset.\n\n Only applicable for video datasets.\n\n Args:\n ftype (None): an optional field type to which to restrict the\n returned schema. Must be a subclass of\n :class:`fiftyone.core.fields.Field`\n embedded_doc_type (None): an optional embedded document type to\n which to restrict the returned schema. Must be a subclass of\n :class:`fiftyone.core.odm.BaseEmbeddedDocument`\n include_private (False): whether to include fields that start with\n `_` in the returned schema\n\n Returns:\n a dictionary mapping field names to field types, or ``None`` if\n the dataset is not a video dataset\n '
if (self.media_type != fom.VIDEO):
return None
return self._frame_doc_cls.get_field_schema(ftype=ftype, embedded_doc_type=embedded_doc_type, include_private=include_private) | def get_frame_field_schema(self, ftype=None, embedded_doc_type=None, include_private=False):
'Returns a schema dictionary describing the fields of the frames of\n the samples in the dataset.\n\n Only applicable for video datasets.\n\n Args:\n ftype (None): an optional field type to which to restrict the\n returned schema. Must be a subclass of\n :class:`fiftyone.core.fields.Field`\n embedded_doc_type (None): an optional embedded document type to\n which to restrict the returned schema. Must be a subclass of\n :class:`fiftyone.core.odm.BaseEmbeddedDocument`\n include_private (False): whether to include fields that start with\n `_` in the returned schema\n\n Returns:\n a dictionary mapping field names to field types, or ``None`` if\n the dataset is not a video dataset\n '
if (self.media_type != fom.VIDEO):
return None
return self._frame_doc_cls.get_field_schema(ftype=ftype, embedded_doc_type=embedded_doc_type, include_private=include_private)<|docstring|>Returns a schema dictionary describing the fields of the frames of
the samples in the dataset.
Only applicable for video datasets.
Args:
ftype (None): an optional field type to which to restrict the
returned schema. Must be a subclass of
:class:`fiftyone.core.fields.Field`
embedded_doc_type (None): an optional embedded document type to
which to restrict the returned schema. Must be a subclass of
:class:`fiftyone.core.odm.BaseEmbeddedDocument`
include_private (False): whether to include fields that start with
`_` in the returned schema
Returns:
a dictionary mapping field names to field types, or ``None`` if
the dataset is not a video dataset<|endoftext|> |
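Sketch of the video-only behavior; for non-video datasets the method returns None:

if dataset.media_type == "video":
    print(list(dataset.get_frame_field_schema().keys()))
else:
    assert dataset.get_frame_field_schema() is None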
c3b5ee831ee8a6c34aec7120c043a424660c8593681ef0131158cb54df0b351d | def add_sample_field(self, field_name, ftype, embedded_doc_type=None, subfield=None):
'Adds a new sample field to the dataset.\n\n Args:\n field_name: the field name\n ftype: the field type to create. Must be a subclass of\n :class:`fiftyone.core.fields.Field`\n embedded_doc_type (None): the\n :class:`fiftyone.core.odm.BaseEmbeddedDocument` type of the\n field. Used only when ``ftype`` is an embedded\n :class:`fiftyone.core.fields.EmbeddedDocumentField`\n subfield (None): the type of the contained field. Used only when\n ``ftype`` is a :class:`fiftyone.core.fields.ListField` or\n :class:`fiftyone.core.fields.DictField`\n '
self._sample_doc_cls.add_field(field_name, ftype, embedded_doc_type=embedded_doc_type, subfield=subfield) | Adds a new sample field to the dataset.
Args:
field_name: the field name
ftype: the field type to create. Must be a subclass of
:class:`fiftyone.core.fields.Field`
embedded_doc_type (None): the
:class:`fiftyone.core.odm.BaseEmbeddedDocument` type of the
field. Used only when ``ftype`` is an embedded
:class:`fiftyone.core.fields.EmbeddedDocumentField`
subfield (None): the type of the contained field. Used only when
``ftype`` is a :class:`fiftyone.core.fields.ListField` or
:class:`fiftyone.core.fields.DictField` | fiftyone/core/dataset.py | add_sample_field | dadounhind/fiftyone | 1 | python | def add_sample_field(self, field_name, ftype, embedded_doc_type=None, subfield=None):
'Adds a new sample field to the dataset.\n\n Args:\n field_name: the field name\n ftype: the field type to create. Must be a subclass of\n :class:`fiftyone.core.fields.Field`\n embedded_doc_type (None): the\n :class:`fiftyone.core.odm.BaseEmbeddedDocument` type of the\n field. Used only when ``ftype`` is an embedded\n :class:`fiftyone.core.fields.EmbeddedDocumentField`\n subfield (None): the type of the contained field. Used only when\n ``ftype`` is a :class:`fiftyone.core.fields.ListField` or\n :class:`fiftyone.core.fields.DictField`\n '
self._sample_doc_cls.add_field(field_name, ftype, embedded_doc_type=embedded_doc_type, subfield=subfield) | def add_sample_field(self, field_name, ftype, embedded_doc_type=None, subfield=None):
'Adds a new sample field to the dataset.\n\n Args:\n field_name: the field name\n ftype: the field type to create. Must be a subclass of\n :class:`fiftyone.core.fields.Field`\n embedded_doc_type (None): the\n :class:`fiftyone.core.odm.BaseEmbeddedDocument` type of the\n field. Used only when ``ftype`` is an embedded\n :class:`fiftyone.core.fields.EmbeddedDocumentField`\n subfield (None): the type of the contained field. Used only when\n ``ftype`` is a :class:`fiftyone.core.fields.ListField` or\n :class:`fiftyone.core.fields.DictField`\n '
self._sample_doc_cls.add_field(field_name, ftype, embedded_doc_type=embedded_doc_type, subfield=subfield)<|docstring|>Adds a new sample field to the dataset.
Args:
field_name: the field name
ftype: the field type to create. Must be a subclass of
:class:`fiftyone.core.fields.Field`
embedded_doc_type (None): the
:class:`fiftyone.core.odm.BaseEmbeddedDocument` type of the
field. Used only when ``ftype`` is an embedded
:class:`fiftyone.core.fields.EmbeddedDocumentField`
subfield (None): the type of the contained field. Used only when
``ftype`` is a :class:`fiftyone.core.fields.ListField` or
:class:`fiftyone.core.fields.DictField`<|endoftext|> |
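A sketch adding one scalar field and one embedded label field; the field names are illustrative:

import fiftyone.core.fields as fof
import fiftyone.core.labels as fol

dataset.add_sample_field("quality_score", fof.FloatField)

dataset.add_sample_field(
    "weather",
    fof.EmbeddedDocumentField,
    embedded_doc_type=fol.Classification,
)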
55e94443cc10da7494dd4e7025b722dc0b3214fa00a58c70a8542f94ea418c51 | def add_frame_field(self, field_name, ftype, embedded_doc_type=None, subfield=None):
'Adds a new frame-level field to the dataset.\n\n Only applicable to video datasets.\n\n Args:\n field_name: the field name\n ftype: the field type to create. Must be a subclass of\n :class:`fiftyone.core.fields.Field`\n embedded_doc_type (None): the\n :class:`fiftyone.core.odm.BaseEmbeddedDocument` type of the\n field. Used only when ``ftype`` is an embedded\n :class:`fiftyone.core.fields.EmbeddedDocumentField`\n subfield (None): the type of the contained field. Used only when\n ``ftype`` is a :class:`fiftyone.core.fields.ListField` or\n :class:`fiftyone.core.fields.DictField`\n '
if (self.media_type != fom.VIDEO):
raise ValueError('Only video datasets have frame fields')
self._frame_doc_cls.add_field(field_name, ftype, embedded_doc_type=embedded_doc_type, subfield=subfield) | Adds a new frame-level field to the dataset.
Only applicable to video datasets.
Args:
field_name: the field name
ftype: the field type to create. Must be a subclass of
:class:`fiftyone.core.fields.Field`
embedded_doc_type (None): the
:class:`fiftyone.core.odm.BaseEmbeddedDocument` type of the
field. Used only when ``ftype`` is an embedded
:class:`fiftyone.core.fields.EmbeddedDocumentField`
subfield (None): the type of the contained field. Used only when
``ftype`` is a :class:`fiftyone.core.fields.ListField` or
:class:`fiftyone.core.fields.DictField` | fiftyone/core/dataset.py | add_frame_field | dadounhind/fiftyone | 1 | python | def add_frame_field(self, field_name, ftype, embedded_doc_type=None, subfield=None):
'Adds a new frame-level field to the dataset.\n\n Only applicable to video datasets.\n\n Args:\n field_name: the field name\n ftype: the field type to create. Must be a subclass of\n :class:`fiftyone.core.fields.Field`\n embedded_doc_type (None): the\n :class:`fiftyone.core.odm.BaseEmbeddedDocument` type of the\n field. Used only when ``ftype`` is an embedded\n :class:`fiftyone.core.fields.EmbeddedDocumentField`\n subfield (None): the type of the contained field. Used only when\n ``ftype`` is a :class:`fiftyone.core.fields.ListField` or\n :class:`fiftyone.core.fields.DictField`\n '
if (self.media_type != fom.VIDEO):
raise ValueError('Only video datasets have frame fields')
self._frame_doc_cls.add_field(field_name, ftype, embedded_doc_type=embedded_doc_type, subfield=subfield) | def add_frame_field(self, field_name, ftype, embedded_doc_type=None, subfield=None):
'Adds a new frame-level field to the dataset.\n\n Only applicable to video datasets.\n\n Args:\n field_name: the field name\n ftype: the field type to create. Must be a subclass of\n :class:`fiftyone.core.fields.Field`\n embedded_doc_type (None): the\n :class:`fiftyone.core.odm.BaseEmbeddedDocument` type of the\n field. Used only when ``ftype`` is an embedded\n :class:`fiftyone.core.fields.EmbeddedDocumentField`\n subfield (None): the type of the contained field. Used only when\n ``ftype`` is a :class:`fiftyone.core.fields.ListField` or\n :class:`fiftyone.core.fields.DictField`\n '
if (self.media_type != fom.VIDEO):
raise ValueError('Only video datasets have frame fields')
self._frame_doc_cls.add_field(field_name, ftype, embedded_doc_type=embedded_doc_type, subfield=subfield)<|docstring|>Adds a new frame-level field to the dataset.
Only applicable to video datasets.
Args:
field_name: the field name
ftype: the field type to create. Must be a subclass of
:class:`fiftyone.core.fields.Field`
embedded_doc_type (None): the
:class:`fiftyone.core.odm.BaseEmbeddedDocument` type of the
field. Used only when ``ftype`` is an embedded
:class:`fiftyone.core.fields.EmbeddedDocumentField`
subfield (None): the type of the contained field. Used only when
``ftype`` is a :class:`fiftyone.core.fields.ListField` or
:class:`fiftyone.core.fields.DictField`<|endoftext|> |
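The frame-level variant, valid only when the dataset's media type is video; the field name is again illustrative:

import fiftyone.core.fields as fof
import fiftyone.core.labels as fol

if dataset.media_type == "video":
    dataset.add_frame_field(
        "detections",
        fof.EmbeddedDocumentField,
        embedded_doc_type=fol.Detections,
    )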
a4b53d9404f9acbdaccd649ee5b62d61b288a4f3b9d9643a0d27b28ef56378da | def rename_sample_field(self, field_name, new_field_name):
'Renames the sample field to the given new name.\n\n You can use dot notation (``embedded.field.name``) to rename embedded\n fields.\n\n Args:\n field_name: the field name or ``embedded.field.name``\n new_field_name: the new field name or ``embedded.field.name``\n '
self._rename_sample_fields({field_name: new_field_name}) | Renames the sample field to the given new name.
You can use dot notation (``embedded.field.name``) to rename embedded
fields.
Args:
field_name: the field name or ``embedded.field.name``
new_field_name: the new field name or ``embedded.field.name`` | fiftyone/core/dataset.py | rename_sample_field | dadounhind/fiftyone | 1 | python | def rename_sample_field(self, field_name, new_field_name):
'Renames the sample field to the given new name.\n\n You can use dot notation (``embedded.field.name``) to rename embedded\n fields.\n\n Args:\n field_name: the field name or ``embedded.field.name``\n new_field_name: the new field name or ``embedded.field.name``\n '
self._rename_sample_fields({field_name: new_field_name}) | def rename_sample_field(self, field_name, new_field_name):
'Renames the sample field to the given new name.\n\n You can use dot notation (``embedded.field.name``) to rename embedded\n fields.\n\n Args:\n field_name: the field name or ``embedded.field.name``\n new_field_name: the new field name or ``embedded.field.name``\n '
self._rename_sample_fields({field_name: new_field_name})<|docstring|>Renames the sample field to the given new name.
You can use dot notation (``embedded.field.name``) to rename embedded
fields.
Args:
field_name: the field name or ``embedded.field.name``
new_field_name: the new field name or ``embedded.field.name``<|endoftext|> |
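Renaming sketches, including the dot-notation form for embedded fields; all field names are hypothetical:

dataset.rename_sample_field("ground_truth", "annotations")

dataset.rename_sample_field(
    "predictions.detections.conf",
    "predictions.detections.confidence",
)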
0665255f59e2495a59568ce46ee74ce898d21a615276284d033b09e18541b218 | def rename_sample_fields(self, field_mapping):
'Renames the sample fields to the given new names.\n\n You can use dot notation (``embedded.field.name``) to rename embedded\n fields.\n\n Args:\n field_mapping: a dict mapping field names to new field names\n '
self._rename_sample_fields(field_mapping) | Renames the sample fields to the given new names.
You can use dot notation (``embedded.field.name``) to rename embedded
fields.
Args:
field_mapping: a dict mapping field names to new field names | fiftyone/core/dataset.py | rename_sample_fields | dadounhind/fiftyone | 1 | python | def rename_sample_fields(self, field_mapping):
'Renames the sample fields to the given new names.\n\n You can use dot notation (``embedded.field.name``) to rename embedded\n fields.\n\n Args:\n field_mapping: a dict mapping field names to new field names\n '
self._rename_sample_fields(field_mapping) | def rename_sample_fields(self, field_mapping):
'Renames the sample fields to the given new names.\n\n You can use dot notation (``embedded.field.name``) to rename embedded\n fields.\n\n Args:\n field_mapping: a dict mapping field names to new field names\n '
self._rename_sample_fields(field_mapping)<|docstring|>Renames the sample fields to the given new names.
You can use dot notation (``embedded.field.name``) to rename embedded
fields.
Args:
field_mapping: a dict mapping field names to new field names<|endoftext|> |
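The bulk form takes a mapping of old names to new names (names again hypothetical):

dataset.rename_sample_fields({
    "gt_label": "ground_truth",
    "pred_label": "predictions",
})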