body_hash | body | docstring | path | name | repository_name | repository_stars | lang | body_without_docstring | unified
---|---|---|---|---|---|---|---|---|---|
4e0c1b5a8046000c11867e7d8cb36485265a1d48bb922bba7efece4ac898766c | def _calc_d(aod700, p):
'Calculate the d coefficient.'
p0 = 101325.0
dp = (1 / (18 + (152 * aod700)))
d = (((((- 0.337) * (aod700 ** 2)) + (0.63 * aod700)) + 0.116) + (dp * np.log((p / p0))))
return d | Calculate the d coefficient. | pvlib/clearsky.py | _calc_d | Antoine-0/pvlib-python | 695 | python | def _calc_d(aod700, p):
p0 = 101325.0
dp = (1 / (18 + (152 * aod700)))
d = (((((- 0.337) * (aod700 ** 2)) + (0.63 * aod700)) + 0.116) + (dp * np.log((p / p0))))
return d | def _calc_d(aod700, p):
p0 = 101325.0
dp = (1 / (18 + (152 * aod700)))
d = (((((- 0.337) * (aod700 ** 2)) + (0.63 * aod700)) + 0.116) + (dp * np.log((p / p0))))
return d<|docstring|>Calculate the d coefficient.<|endoftext|> |
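For readability, here is a minimal sketch of the same d-coefficient calculation as the row above, with the machine-generated parentheses removed; the sample input at the end is hypothetical.

```python
import numpy as np

def calc_d(aod700, p):
    """d coefficient as in pvlib's _calc_d: a polynomial in aod700
    plus a pressure-dependent correction that vanishes at sea level."""
    p0 = 101325.0  # reference pressure [Pa]
    dp = 1 / (18 + 152 * aod700)
    return -0.337 * aod700**2 + 0.63 * aod700 + 0.116 + dp * np.log(p / p0)

# at p = p0 the log term is zero, leaving only the aod700 polynomial
print(calc_d(aod700=0.1, p=101325.0))  # ~0.1756
```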
5f1575b66f00f0bf9083ae53cf35bbd0f19a97eae4f969a0b32d891a426e4fb6 | def _calc_stats(data, samples_per_window, sample_interval, H):
' Calculates statistics for each window, used by Reno-style clear\n sky detection functions. Does not return the line length statistic\n which is provided by _calc_windowed_stat and _line_length.\n\n Calculations are done on a sliding window defined by the Hankel matrix H.\n Columns in H define the indices for each window. Each window contains\n samples_per_window index values. The first window starts with index 0;\n the last window ends at the last index position in data.\n\n In the calculation of data_slope_nstd, a choice is made here where [1]_ is\n ambiguous. data_slope_nstd is the standard deviation of slopes divided by\n the mean GHI for each interval; see [1]_ Eq. 11. For intervals containing\n e.g. 10 values, there are 9 slope values in the standard deviation, and the\n mean is calculated using all 10 values. Eq. 11 in [1]_ is ambiguous if\n the mean should be calculated using 9 points (left ends of each slope)\n or all 10 points.\n\n Parameters\n ----------\n data : Series\n samples_per_window : int\n Number of data points in each window\n sample_interval : float\n Time in minutes in each sample interval\n H : 2D ndarray\n Hankel matrix defining the indices for each window.\n\n Returns\n -------\n data_mean : Series\n mean of data in each window\n data_max : Series\n maximum of data in each window\n data_slope_nstd : Series\n standard deviation of difference between data points in each window\n data_slope : Series\n difference between successive data points\n\n References\n ----------\n .. [1] Reno, M.J. and C.W. Hansen, "Identification of periods of clear\n sky irradiance in time series of GHI measurements" Renewable Energy,\n v90, p. 520-531, 2016.\n '
data_mean = data.values[H].mean(axis=0)
data_mean = _to_centered_series(data_mean, data.index, samples_per_window)
data_max = data.values[H].max(axis=0)
data_max = _to_centered_series(data_max, data.index, samples_per_window)
data_diff = data.diff().shift((- 1))
data_slope = (data_diff / sample_interval)
data_slope_nstd = _slope_nstd_windowed(data_slope.values[:(- 1)], data, H, samples_per_window, sample_interval)
return (data_mean, data_max, data_slope_nstd, data_slope) | Calculates statistics for each window, used by Reno-style clear
sky detection functions. Does not return the line length statistic
which is provided by _calc_windowed_stat and _line_length.
Calculations are done on a sliding window defined by the Hankel matrix H.
Columns in H define the indices for each window. Each window contains
samples_per_window index values. The first window starts with index 0;
the last window ends at the last index position in data.
In the calculation of data_slope_nstd, a choice is made here where [1]_ is
ambiguous. data_slope_nstd is the standard deviation of slopes divided by
the mean GHI for each interval; see [1]_ Eq. 11. For intervals containing
e.g. 10 values, there are 9 slope values in the standard deviation, and the
mean is calculated using all 10 values. Eq. 11 in [1]_ is ambiguous if
the mean should be calculated using 9 points (left ends of each slope)
or all 10 points.
Parameters
----------
data : Series
samples_per_window : int
Number of data points in each window
sample_interval : float
Time in minutes in each sample interval
H : 2D ndarray
Hankel matrix defining the indices for each window.
Returns
-------
data_mean : Series
mean of data in each window
data_max : Series
maximum of data in each window
data_slope_nstd : Series
standard deviation of difference between data points in each window
data_slope : Series
difference between successive data points
References
----------
.. [1] Reno, M.J. and C.W. Hansen, "Identification of periods of clear
sky irradiance in time series of GHI measurements" Renewable Energy,
v90, p. 520-531, 2016. | pvlib/clearsky.py | _calc_stats | Antoine-0/pvlib-python | 695 | python | def _calc_stats(data, samples_per_window, sample_interval, H):
' Calculates statistics for each window, used by Reno-style clear\n sky detection functions. Does not return the line length statistic\n which is provided by _calc_windowed_stat and _line_length.\n\n Calculations are done on a sliding window defined by the Hankel matrix H.\n Columns in H define the indices for each window. Each window contains\n samples_per_window index values. The first window starts with index 0;\n the last window ends at the last index position in data.\n\n In the calculation of data_slope_nstd, a choice is made here where [1]_ is\n ambiguous. data_slope_nstd is the standard deviation of slopes divided by\n the mean GHI for each interval; see [1]_ Eq. 11. For intervals containing\n e.g. 10 values, there are 9 slope values in the standard deviation, and the\n mean is calculated using all 10 values. Eq. 11 in [1]_ is ambiguous if\n the mean should be calculated using 9 points (left ends of each slope)\n or all 10 points.\n\n Parameters\n ----------\n data : Series\n samples_per_window : int\n Number of data points in each window\n sample_interval : float\n Time in minutes in each sample interval\n H : 2D ndarray\n Hankel matrix defining the indices for each window.\n\n Returns\n -------\n data_mean : Series\n mean of data in each window\n data_max : Series\n maximum of data in each window\n data_slope_nstd : Series\n standard deviation of difference between data points in each window\n data_slope : Series\n difference between successive data points\n\n References\n ----------\n .. [1] Reno, M.J. and C.W. Hansen, "Identification of periods of clear\n sky irradiance in time series of GHI measurements" Renewable Energy,\n v90, p. 520-531, 2016.\n '
data_mean = data.values[H].mean(axis=0)
data_mean = _to_centered_series(data_mean, data.index, samples_per_window)
data_max = data.values[H].max(axis=0)
data_max = _to_centered_series(data_max, data.index, samples_per_window)
data_diff = data.diff().shift((- 1))
data_slope = (data_diff / sample_interval)
data_slope_nstd = _slope_nstd_windowed(data_slope.values[:(- 1)], data, H, samples_per_window, sample_interval)
return (data_mean, data_max, data_slope_nstd, data_slope) | def _calc_stats(data, samples_per_window, sample_interval, H):
' Calculates statistics for each window, used by Reno-style clear\n sky detection functions. Does not return the line length statistic\n which is provided by _calc_windowed_stat and _line_length.\n\n Calculations are done on a sliding window defined by the Hankel matrix H.\n Columns in H define the indices for each window. Each window contains\n samples_per_window index values. The first window starts with index 0;\n the last window ends at the last index position in data.\n\n In the calculation of data_slope_nstd, a choice is made here where [1]_ is\n ambiguous. data_slope_nstd is the standard deviation of slopes divided by\n the mean GHI for each interval; see [1]_ Eq. 11. For intervals containing\n e.g. 10 values, there are 9 slope values in the standard deviation, and the\n mean is calculated using all 10 values. Eq. 11 in [1]_ is ambiguous if\n the mean should be calculated using 9 points (left ends of each slope)\n or all 10 points.\n\n Parameters\n ----------\n data : Series\n samples_per_window : int\n Number of data points in each window\n sample_interval : float\n Time in minutes in each sample interval\n H : 2D ndarray\n Hankel matrix defining the indices for each window.\n\n Returns\n -------\n data_mean : Series\n mean of data in each window\n data_max : Series\n maximum of data in each window\n data_slope_nstd : Series\n standard deviation of difference between data points in each window\n data_slope : Series\n difference between successive data points\n\n References\n ----------\n .. [1] Reno, M.J. and C.W. Hansen, "Identification of periods of clear\n sky irradiance in time series of GHI measurements" Renewable Energy,\n v90, p. 520-531, 2016.\n '
data_mean = data.values[H].mean(axis=0)
data_mean = _to_centered_series(data_mean, data.index, samples_per_window)
data_max = data.values[H].max(axis=0)
data_max = _to_centered_series(data_max, data.index, samples_per_window)
data_diff = data.diff().shift((- 1))
data_slope = (data_diff / sample_interval)
data_slope_nstd = _slope_nstd_windowed(data_slope.values[:(- 1)], data, H, samples_per_window, sample_interval)
return (data_mean, data_max, data_slope_nstd, data_slope)<|docstring|>Calculates statistics for each window, used by Reno-style clear
sky detection functions. Does not return the line length statistic
which is provided by _calc_windowed_stat and _line_length.
Calculations are done on a sliding window defined by the Hankel matrix H.
Columns in H define the indices for each window. Each window contains
samples_per_window index values. The first window starts with index 0;
the last window ends at the last index position in data.
In the calculation of data_slope_nstd, a choice is made here where [1]_ is
ambiguous. data_slope_nstd is the standard deviation of slopes divided by
the mean GHI for each interval; see [1]_ Eq. 11. For intervals containing
e.g. 10 values, there are 9 slope values in the standard deviation, and the
mean is calculated using all 10 values. Eq. 11 in [1]_ is ambiguous if
the mean should be calculated using 9 points (left ends of each slope)
or all 10 points.
Parameters
----------
data : Series
samples_per_window : int
Number of data points in each window
sample_interval : float
Time in minutes in each sample interval
H : 2D ndarray
Hankel matrix defining the indices for each window.
Returns
-------
data_mean : Series
mean of data in each window
data_max : Series
maximum of data in each window
data_slope_nstd : Series
standard deviation of difference between data points in each window
data_slope : Series
difference between successive data points
References
----------
.. [1] Reno, M.J. and C.W. Hansen, "Identification of periods of clear
sky irradiance in time series of GHI measurements" Renewable Energy,
v90, p. 520-531, 2016.<|endoftext|> |
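The Hankel matrix H that _calc_stats indexes with is what turns the sliding windows into a single vectorized operation: each column of H lists the sample indices of one window. A minimal sketch with made-up values:

```python
import numpy as np
from scipy.linalg import hankel

samples_per_window, n = 3, 6
# same construction as in detect_clearsky below
H = hankel(np.arange(samples_per_window),
           np.arange(samples_per_window - 1, n))
print(H)
# [[0 1 2 3]
#  [1 2 3 4]
#  [2 3 4 5]]   <- column j holds the indices of window j

data = np.array([10., 12., 11., 15., 14., 13.])
print(data[H].mean(axis=0))  # per-window means, as in _calc_stats
print(data[H].max(axis=0))   # per-window maxima
```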
ac19757181e5432eb7b7e147683e42b285c0c68e0d598bd654eb1090fa19d2cc | def _get_sample_intervals(times, win_length):
' Calculates time interval and samples per window for Reno-style clear\n sky detection functions\n '
deltas = (np.diff(times.values) / np.timedelta64(1, '60s'))
if (times.inferred_freq and (len(np.unique(deltas)) == 1)):
sample_interval = (times[1] - times[0])
sample_interval = (sample_interval.seconds / 60)
samples_per_window = int((win_length / sample_interval))
return (sample_interval, samples_per_window)
else:
raise NotImplementedError('algorithm does not yet support unequal times. consider resampling your data.') | Calculates time interval and samples per window for Reno-style clear
sky detection functions | pvlib/clearsky.py | _get_sample_intervals | Antoine-0/pvlib-python | 695 | python | def _get_sample_intervals(times, win_length):
' Calculates time interval and samples per window for Reno-style clear\n sky detection functions\n '
deltas = (np.diff(times.values) / np.timedelta64(1, '60s'))
if (times.inferred_freq and (len(np.unique(deltas)) == 1)):
sample_interval = (times[1] - times[0])
sample_interval = (sample_interval.seconds / 60)
samples_per_window = int((win_length / sample_interval))
return (sample_interval, samples_per_window)
else:
raise NotImplementedError('algorithm does not yet support unequal times. consider resampling your data.') | def _get_sample_intervals(times, win_length):
' Calculates time interval and samples per window for Reno-style clear\n sky detection functions\n '
deltas = (np.diff(times.values) / np.timedelta64(1, '60s'))
if (times.inferred_freq and (len(np.unique(deltas)) == 1)):
sample_interval = (times[1] - times[0])
sample_interval = (sample_interval.seconds / 60)
samples_per_window = int((win_length / sample_interval))
return (sample_interval, samples_per_window)
else:
raise NotImplementedError('algorithm does not yet support unequal times. consider resampling your data.')<|docstring|>Calculates time interval and samples per window for Reno-style clear
sky detection functions<|endoftext|> |
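A short usage sketch of the interval logic above on a hypothetical, equally spaced 1-minute index; the delta computation mirrors the function body.

```python
import numpy as np
import pandas as pd

times = pd.date_range('2020-06-01 08:00', periods=120, freq='1min')

# same check as in _get_sample_intervals: all spacings equal, in minutes
deltas = np.diff(times.values) / np.timedelta64(1, '60s')
assert times.inferred_freq and len(np.unique(deltas)) == 1

sample_interval = (times[1] - times[0]).seconds / 60  # 1.0 minute
samples_per_window = int(10 / sample_interval)        # 10-minute window -> 10 samples
print(sample_interval, samples_per_window)            # 1.0 10
```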
1b86657666db99419d1e6a9d164738d0786ba29e321aa9634e1cbc4f74bb316e | def _clear_sample_index(clear_windows, samples_per_window, align, H):
'\n Returns indices of clear samples in clear windows\n '
shift = (- (samples_per_window // 2))
idx = clear_windows.shift(shift)
idx = idx.drop(clear_windows.index[(1 - samples_per_window):])
idx = idx.astype(bool)
clear_samples = np.unique(H[:, idx])
return clear_samples | Returns indices of clear samples in clear windows | pvlib/clearsky.py | _clear_sample_index | Antoine-0/pvlib-python | 695 | python | def _clear_sample_index(clear_windows, samples_per_window, align, H):
'\n \n '
shift = (- (samples_per_window // 2))
idx = clear_windows.shift(shift)
idx = idx.drop(clear_windows.index[(1 - samples_per_window):])
idx = idx.astype(bool)
clear_samples = np.unique(H[:, idx])
return clear_samples | def _clear_sample_index(clear_windows, samples_per_window, align, H):
'\n \n '
shift = (- (samples_per_window // 2))
idx = clear_windows.shift(shift)
idx = idx.drop(clear_windows.index[(1 - samples_per_window):])
idx = idx.astype(bool)
clear_samples = np.unique(H[:, idx])
return clear_samples<|docstring|>Returns indices of clear samples in clear windows<|endoftext|> |
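With the indexing corrected to H[:, idx], the expansion from window-level flags back to sample-level indices can be traced on a toy series; the flag values are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.linalg import hankel

samples_per_window, n = 3, 8
H = hankel(np.arange(samples_per_window),
           np.arange(samples_per_window - 1, n))

# centered flags: True marks a clear window centered on that sample
clear_windows = pd.Series([False, True, True, False,
                           False, False, False, False])

shift = -(samples_per_window // 2)
idx = clear_windows.shift(shift)                             # center labels -> window starts
idx = idx.drop(clear_windows.index[1 - samples_per_window:]) # drop trailing partial windows
idx = idx.astype(bool)
print(np.unique(H[:, idx]))  # [0 1 2 3]: every sample inside a clear window
```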
ece46bdeb4ddf6d4e8745ae9756bf9b15c6d78345520311c48a26b55ba5b3540 | def detect_clearsky(measured, clearsky, times=None, window_length=10, mean_diff=75, max_diff=75, lower_line_length=(- 5), upper_line_length=10, var_diff=0.005, slope_dev=8, max_iterations=20, return_components=False):
'\n Detects clear sky times according to the algorithm developed by Reno\n and Hansen for GHI measurements. The algorithm [1]_ was designed and\n validated for analyzing GHI time series only. Users may attempt to\n apply it to other types of time series data using different filter\n settings, but should be skeptical of the results.\n\n The algorithm detects clear sky times by comparing statistics for a\n measured time series and an expected clearsky time series.\n Statistics are calculated using a sliding time window (e.g., 10\n minutes). An iterative algorithm identifies clear periods, uses the\n identified periods to estimate bias in the clearsky data, scales the\n clearsky data and repeats.\n\n Clear times are identified by meeting 5 criteria. Default values for\n these thresholds are appropriate for 10 minute windows of 1 minute\n GHI data.\n\n Parameters\n ----------\n measured : array or Series\n Time series of measured GHI. [W/m2]\n clearsky : array or Series\n Time series of the expected clearsky GHI. [W/m2]\n times : DatetimeIndex or None, default None.\n Times of measured and clearsky values. If None the index of measured\n will be used.\n window_length : int, default 10\n Length of sliding time window in minutes. Must be greater than 2\n periods.\n mean_diff : float, default 75\n Threshold value for agreement between mean values of measured\n and clearsky in each interval, see Eq. 6 in [1]. [W/m2]\n max_diff : float, default 75\n Threshold value for agreement between maxima of measured and\n clearsky values in each interval, see Eq. 7 in [1]. [W/m2]\n lower_line_length : float, default -5\n Lower limit of line length criterion from Eq. 8 in [1].\n Criterion satisfied when lower_line_length < line length difference\n < upper_line_length.\n upper_line_length : float, default 10\n Upper limit of line length criterion from Eq. 8 in [1].\n var_diff : float, default 0.005\n Threshold value in Hz for the agreement between normalized\n standard deviations of rate of change in irradiance, see Eqs. 9\n through 11 in [1].\n slope_dev : float, default 8\n Threshold value for agreement between the largest magnitude of\n change in successive values, see Eqs. 12 through 14 in [1].\n max_iterations : int, default 20\n Maximum number of times to apply a different scaling factor to\n the clearsky and redetermine clear_samples. Must be 1 or larger.\n return_components : bool, default False\n Controls if additional output should be returned. See below.\n\n Returns\n -------\n clear_samples : array or Series\n Boolean array or Series of whether or not the given time is\n clear. Return type is the same as the input type.\n\n components : OrderedDict, optional\n Dict of arrays of whether or not the given time window is clear\n for each condition. Only provided if return_components is True.\n\n alpha : scalar, optional\n Scaling factor applied to the clearsky_ghi to obtain the\n detected clear_samples. Only provided if return_components is\n True.\n\n Raises\n ------\n ValueError\n If measured is not a Series and times is not provided\n NotImplementedError\n If timestamps are not equally spaced\n\n References\n ----------\n .. [1] Reno, M.J. and C.W. Hansen, "Identification of periods of clear\n sky irradiance in time series of GHI measurements" Renewable Energy,\n v90, p. 520-531, 2016.\n\n Notes\n -----\n Initial implementation in MATLAB by Matthew Reno. Modifications for\n computational efficiency by Joshua Patrick and Curtis Martin. Ported\n to Python by Will Holmgren, Tony Lorenzo, and Cliff Hansen.\n\n Differences from MATLAB version:\n\n * no support for unequal times\n * automatically determines sample_interval\n * requires a reference clear sky series instead of calculating one\n from a user supplied location and UTCoffset\n * parameters are controllable via keyword arguments\n * option to return individual test components and clearsky scaling\n parameter\n * uses centered windows (Matlab function uses left-aligned windows)\n '
if (times is None):
try:
times = measured.index
except AttributeError:
raise ValueError('times is required when measured is not a Series')
ispandas = isinstance(measured, pd.Series)
if (not ispandas):
meas = pd.Series(measured, index=times)
else:
meas = measured
if (not isinstance(clearsky, pd.Series)):
clear = pd.Series(clearsky, index=times)
else:
clear = clearsky
(sample_interval, samples_per_window) = _get_sample_intervals(times, window_length)
H = hankel(np.arange(samples_per_window), np.arange((samples_per_window - 1), len(times)))
(meas_mean, meas_max, meas_slope_nstd, meas_slope) = _calc_stats(meas, samples_per_window, sample_interval, H)
meas_line_length = _line_length_windowed(meas, H, samples_per_window, sample_interval)
(clear_mean, clear_max, _, clear_slope) = _calc_stats(clear, samples_per_window, sample_interval, H)
alpha = 1
for iteration in range(max_iterations):
scaled_clear = (alpha * clear)
clear_line_length = _line_length_windowed(scaled_clear, H, samples_per_window, sample_interval)
line_diff = (meas_line_length - clear_line_length)
slope_max_diff = _max_diff_windowed((meas - scaled_clear), H, samples_per_window)
c1 = (np.abs((meas_mean - (alpha * clear_mean))) < mean_diff)
c2 = (np.abs((meas_max - (alpha * clear_max))) < max_diff)
c3 = ((line_diff > lower_line_length) & (line_diff < upper_line_length))
c4 = (meas_slope_nstd < var_diff)
c5 = (slope_max_diff < slope_dev)
c6 = ((clear_mean != 0) & (~ np.isnan(clear_mean)))
clear_windows = (((((c1 & c2) & c3) & c4) & c5) & c6)
clear_samples = np.full_like(meas, False, dtype='bool')
idx = _clear_sample_index(clear_windows, samples_per_window, 'center', H)
clear_samples[idx] = True
previous_alpha = alpha
clear_meas = meas[clear_samples]
clear_clear = clear[clear_samples]
def rmse(alpha):
return np.sqrt(np.mean(((clear_meas - (alpha * clear_clear)) ** 2)))
alpha = minimize_scalar(rmse).x
if (round((alpha * 10000)) == round((previous_alpha * 10000))):
break
else:
import warnings
warnings.warn(('rescaling failed to converge after %s iterations' % max_iterations), RuntimeWarning)
if ispandas:
clear_samples = pd.Series(clear_samples, index=times)
if return_components:
components = OrderedDict()
components['mean_diff_flag'] = c1
components['max_diff_flag'] = c2
components['line_length_flag'] = c3
components['slope_nstd_flag'] = c4
components['slope_max_flag'] = c5
components['mean_nan_flag'] = c6
components['windows'] = clear_windows
components['mean_diff'] = np.abs((meas_mean - (alpha * clear_mean)))
components['max_diff'] = np.abs((meas_max - (alpha * clear_max)))
components['line_length'] = (meas_line_length - clear_line_length)
components['slope_nstd'] = meas_slope_nstd
components['slope_max'] = slope_max_diff
return (clear_samples, components, alpha)
else:
return clear_samples | Detects clear sky times according to the algorithm developed by Reno
and Hansen for GHI measurements. The algorithm [1]_ was designed and
validated for analyzing GHI time series only. Users may attempt to
apply it to other types of time series data using different filter
settings, but should be skeptical of the results.
The algorithm detects clear sky times by comparing statistics for a
measured time series and an expected clearsky time series.
Statistics are calculated using a sliding time window (e.g., 10
minutes). An iterative algorithm identifies clear periods, uses the
identified periods to estimate bias in the clearsky data, scales the
clearsky data and repeats.
Clear times are identified by meeting 5 criteria. Default values for
these thresholds are appropriate for 10 minute windows of 1 minute
GHI data.
Parameters
----------
measured : array or Series
Time series of measured GHI. [W/m2]
clearsky : array or Series
Time series of the expected clearsky GHI. [W/m2]
times : DatetimeIndex or None, default None.
Times of measured and clearsky values. If None the index of measured
will be used.
window_length : int, default 10
Length of sliding time window in minutes. Must be greater than 2
periods.
mean_diff : float, default 75
Threshold value for agreement between mean values of measured
and clearsky in each interval, see Eq. 6 in [1]. [W/m2]
max_diff : float, default 75
Threshold value for agreement between maxima of measured and
clearsky values in each interval, see Eq. 7 in [1]. [W/m2]
lower_line_length : float, default -5
Lower limit of line length criterion from Eq. 8 in [1].
Criterion satisfied when lower_line_length < line length difference
< upper_line_length.
upper_line_length : float, default 10
Upper limit of line length criterion from Eq. 8 in [1].
var_diff : float, default 0.005
Threshold value in Hz for the agreement between normalized
standard deviations of rate of change in irradiance, see Eqs. 9
through 11 in [1].
slope_dev : float, default 8
Threshold value for agreement between the largest magnitude of
change in successive values, see Eqs. 12 through 14 in [1].
max_iterations : int, default 20
Maximum number of times to apply a different scaling factor to
the clearsky and redetermine clear_samples. Must be 1 or larger.
return_components : bool, default False
Controls if additional output should be returned. See below.
Returns
-------
clear_samples : array or Series
Boolean array or Series of whether or not the given time is
clear. Return type is the same as the input type.
components : OrderedDict, optional
Dict of arrays of whether or not the given time window is clear
for each condition. Only provided if return_components is True.
alpha : scalar, optional
Scaling factor applied to the clearsky_ghi to obtain the
detected clear_samples. Only provided if return_components is
True.
Raises
------
ValueError
If measured is not a Series and times is not provided
NotImplementedError
If timestamps are not equally spaced
References
----------
.. [1] Reno, M.J. and C.W. Hansen, "Identification of periods of clear
sky irradiance in time series of GHI measurements" Renewable Energy,
v90, p. 520-531, 2016.
Notes
-----
Initial implementation in MATLAB by Matthew Reno. Modifications for
computational efficiency by Joshua Patrick and Curtis Martin. Ported
to Python by Will Holmgren, Tony Lorenzo, and Cliff Hansen.
Differences from MATLAB version:
* no support for unequal times
* automatically determines sample_interval
* requires a reference clear sky series instead of calculating one
from a user supplied location and UTCoffset
* parameters are controllable via keyword arguments
* option to return individual test components and clearsky scaling
parameter
* uses centered windows (Matlab function uses left-aligned windows) | pvlib/clearsky.py | detect_clearsky | Antoine-0/pvlib-python | 695 | python | def detect_clearsky(measured, clearsky, times=None, window_length=10, mean_diff=75, max_diff=75, lower_line_length=(- 5), upper_line_length=10, var_diff=0.005, slope_dev=8, max_iterations=20, return_components=False):
'\n Detects clear sky times according to the algorithm developed by Reno\n and Hansen for GHI measurements. The algorithm [1]_ was designed and\n validated for analyzing GHI time series only. Users may attempt to\n apply it to other types of time series data using different filter\n settings, but should be skeptical of the results.\n\n The algorithm detects clear sky times by comparing statistics for a\n measured time series and an expected clearsky time series.\n Statistics are calculated using a sliding time window (e.g., 10\n minutes). An iterative algorithm identifies clear periods, uses the\n identified periods to estimate bias in the clearsky data, scales the\n clearsky data and repeats.\n\n Clear times are identified by meeting 5 criteria. Default values for\n these thresholds are appropriate for 10 minute windows of 1 minute\n GHI data.\n\n Parameters\n ----------\n measured : array or Series\n Time series of measured GHI. [W/m2]\n clearsky : array or Series\n Time series of the expected clearsky GHI. [W/m2]\n times : DatetimeIndex or None, default None.\n Times of measured and clearsky values. If None the index of measured\n will be used.\n window_length : int, default 10\n Length of sliding time window in minutes. Must be greater than 2\n periods.\n mean_diff : float, default 75\n Threshold value for agreement between mean values of measured\n and clearsky in each interval, see Eq. 6 in [1]. [W/m2]\n max_diff : float, default 75\n Threshold value for agreement between maxima of measured and\n clearsky values in each interval, see Eq. 7 in [1]. [W/m2]\n lower_line_length : float, default -5\n Lower limit of line length criterion from Eq. 8 in [1].\n Criterion satisfied when lower_line_length < line length difference\n < upper_line_length.\n upper_line_length : float, default 10\n Upper limit of line length criterion from Eq. 8 in [1].\n var_diff : float, default 0.005\n Threshold value in Hz for the agreement between normalized\n standard deviations of rate of change in irradiance, see Eqs. 9\n through 11 in [1].\n slope_dev : float, default 8\n Threshold value for agreement between the largest magnitude of\n change in successive values, see Eqs. 12 through 14 in [1].\n max_iterations : int, default 20\n Maximum number of times to apply a different scaling factor to\n the clearsky and redetermine clear_samples. Must be 1 or larger.\n return_components : bool, default False\n Controls if additional output should be returned. See below.\n\n Returns\n -------\n clear_samples : array or Series\n Boolean array or Series of whether or not the given time is\n clear. Return type is the same as the input type.\n\n components : OrderedDict, optional\n Dict of arrays of whether or not the given time window is clear\n for each condition. Only provided if return_components is True.\n\n alpha : scalar, optional\n Scaling factor applied to the clearsky_ghi to obtain the\n detected clear_samples. Only provided if return_components is\n True.\n\n Raises\n ------\n ValueError\n If measured is not a Series and times is not provided\n NotImplementedError\n If timestamps are not equally spaced\n\n References\n ----------\n .. [1] Reno, M.J. and C.W. Hansen, "Identification of periods of clear\n sky irradiance in time series of GHI measurements" Renewable Energy,\n v90, p. 520-531, 2016.\n\n Notes\n -----\n Initial implementation in MATLAB by Matthew Reno. Modifications for\n computational efficiency by Joshua Patrick and Curtis Martin. Ported\n to Python by Will Holmgren, Tony Lorenzo, and Cliff Hansen.\n\n Differences from MATLAB version:\n\n * no support for unequal times\n * automatically determines sample_interval\n * requires a reference clear sky series instead of calculating one\n from a user supplied location and UTCoffset\n * parameters are controllable via keyword arguments\n * option to return individual test components and clearsky scaling\n parameter\n * uses centered windows (Matlab function uses left-aligned windows)\n '
if (times is None):
try:
times = measured.index
except AttributeError:
raise ValueError('times is required when measured is not a Series')
ispandas = isinstance(measured, pd.Series)
if (not ispandas):
meas = pd.Series(measured, index=times)
else:
meas = measured
if (not isinstance(clearsky, pd.Series)):
clear = pd.Series(clearsky, index=times)
else:
clear = clearsky
(sample_interval, samples_per_window) = _get_sample_intervals(times, window_length)
H = hankel(np.arange(samples_per_window), np.arange((samples_per_window - 1), len(times)))
(meas_mean, meas_max, meas_slope_nstd, meas_slope) = _calc_stats(meas, samples_per_window, sample_interval, H)
meas_line_length = _line_length_windowed(meas, H, samples_per_window, sample_interval)
(clear_mean, clear_max, _, clear_slope) = _calc_stats(clear, samples_per_window, sample_interval, H)
alpha = 1
for iteration in range(max_iterations):
scaled_clear = (alpha * clear)
clear_line_length = _line_length_windowed(scaled_clear, H, samples_per_window, sample_interval)
line_diff = (meas_line_length - clear_line_length)
slope_max_diff = _max_diff_windowed((meas - scaled_clear), H, samples_per_window)
c1 = (np.abs((meas_mean - (alpha * clear_mean))) < mean_diff)
c2 = (np.abs((meas_max - (alpha * clear_max))) < max_diff)
c3 = ((line_diff > lower_line_length) & (line_diff < upper_line_length))
c4 = (meas_slope_nstd < var_diff)
c5 = (slope_max_diff < slope_dev)
c6 = ((clear_mean != 0) & (~ np.isnan(clear_mean)))
clear_windows = (((((c1 & c2) & c3) & c4) & c5) & c6)
clear_samples = np.full_like(meas, False, dtype='bool')
idx = _clear_sample_index(clear_windows, samples_per_window, 'center', H)
clear_samples[idx] = True
previous_alpha = alpha
clear_meas = meas[clear_samples]
clear_clear = clear[clear_samples]
def rmse(alpha):
return np.sqrt(np.mean(((clear_meas - (alpha * clear_clear)) ** 2)))
alpha = minimize_scalar(rmse).x
if (round((alpha * 10000)) == round((previous_alpha * 10000))):
break
else:
import warnings
warnings.warn(('rescaling failed to converge after %s iterations' % max_iterations), RuntimeWarning)
if ispandas:
clear_samples = pd.Series(clear_samples, index=times)
if return_components:
components = OrderedDict()
components['mean_diff_flag'] = c1
components['max_diff_flag'] = c2
components['line_length_flag'] = c3
components['slope_nstd_flag'] = c4
components['slope_max_flag'] = c5
components['mean_nan_flag'] = c6
components['windows'] = clear_windows
components['mean_diff'] = np.abs((meas_mean - (alpha * clear_mean)))
components['max_diff'] = np.abs((meas_max - (alpha * clear_max)))
components['line_length'] = (meas_line_length - clear_line_length)
components['slope_nstd'] = meas_slope_nstd
components['slope_max'] = slope_max_diff
return (clear_samples, components, alpha)
else:
return clear_samples | def detect_clearsky(measured, clearsky, times=None, window_length=10, mean_diff=75, max_diff=75, lower_line_length=(- 5), upper_line_length=10, var_diff=0.005, slope_dev=8, max_iterations=20, return_components=False):
'\n Detects clear sky times according to the algorithm developed by Reno\n and Hansen for GHI measurements. The algorithm [1]_ was designed and\n validated for analyzing GHI time series only. Users may attempt to\n apply it to other types of time series data using different filter\n settings, but should be skeptical of the results.\n\n The algorithm detects clear sky times by comparing statistics for a\n measured time series and an expected clearsky time series.\n Statistics are calculated using a sliding time window (e.g., 10\n minutes). An iterative algorithm identifies clear periods, uses the\n identified periods to estimate bias in the clearsky data, scales the\n clearsky data and repeats.\n\n Clear times are identified by meeting 5 criteria. Default values for\n these thresholds are appropriate for 10 minute windows of 1 minute\n GHI data.\n\n Parameters\n ----------\n measured : array or Series\n Time series of measured GHI. [W/m2]\n clearsky : array or Series\n Time series of the expected clearsky GHI. [W/m2]\n times : DatetimeIndex or None, default None.\n Times of measured and clearsky values. If None the index of measured\n will be used.\n window_length : int, default 10\n Length of sliding time window in minutes. Must be greater than 2\n periods.\n mean_diff : float, default 75\n Threshold value for agreement between mean values of measured\n and clearsky in each interval, see Eq. 6 in [1]. [W/m2]\n max_diff : float, default 75\n Threshold value for agreement between maxima of measured and\n clearsky values in each interval, see Eq. 7 in [1]. [W/m2]\n lower_line_length : float, default -5\n Lower limit of line length criterion from Eq. 8 in [1].\n Criterion satisfied when lower_line_length < line length difference\n < upper_line_length.\n upper_line_length : float, default 10\n Upper limit of line length criterion from Eq. 8 in [1].\n var_diff : float, default 0.005\n Threshold value in Hz for the agreement between normalized\n standard deviations of rate of change in irradiance, see Eqs. 9\n through 11 in [1].\n slope_dev : float, default 8\n Threshold value for agreement between the largest magnitude of\n change in successive values, see Eqs. 12 through 14 in [1].\n max_iterations : int, default 20\n Maximum number of times to apply a different scaling factor to\n the clearsky and redetermine clear_samples. Must be 1 or larger.\n return_components : bool, default False\n Controls if additional output should be returned. See below.\n\n Returns\n -------\n clear_samples : array or Series\n Boolean array or Series of whether or not the given time is\n clear. Return type is the same as the input type.\n\n components : OrderedDict, optional\n Dict of arrays of whether or not the given time window is clear\n for each condition. Only provided if return_components is True.\n\n alpha : scalar, optional\n Scaling factor applied to the clearsky_ghi to obtain the\n detected clear_samples. Only provided if return_components is\n True.\n\n Raises\n ------\n ValueError\n If measured is not a Series and times is not provided\n NotImplementedError\n If timestamps are not equally spaced\n\n References\n ----------\n .. [1] Reno, M.J. and C.W. Hansen, "Identification of periods of clear\n sky irradiance in time series of GHI measurements" Renewable Energy,\n v90, p. 520-531, 2016.\n\n Notes\n -----\n Initial implementation in MATLAB by Matthew Reno. Modifications for\n computational efficiency by Joshua Patrick and Curtis Martin. Ported\n to Python by Will Holmgren, Tony Lorenzo, and Cliff Hansen.\n\n Differences from MATLAB version:\n\n * no support for unequal times\n * automatically determines sample_interval\n * requires a reference clear sky series instead of calculating one\n from a user supplied location and UTCoffset\n * parameters are controllable via keyword arguments\n * option to return individual test components and clearsky scaling\n parameter\n * uses centered windows (Matlab function uses left-aligned windows)\n '
if (times is None):
try:
times = measured.index
except AttributeError:
raise ValueError('times is required when measured is not a Series')
ispandas = isinstance(measured, pd.Series)
if (not ispandas):
meas = pd.Series(measured, index=times)
else:
meas = measured
if (not isinstance(clearsky, pd.Series)):
clear = pd.Series(clearsky, index=times)
else:
clear = clearsky
(sample_interval, samples_per_window) = _get_sample_intervals(times, window_length)
H = hankel(np.arange(samples_per_window), np.arange((samples_per_window - 1), len(times)))
(meas_mean, meas_max, meas_slope_nstd, meas_slope) = _calc_stats(meas, samples_per_window, sample_interval, H)
meas_line_length = _line_length_windowed(meas, H, samples_per_window, sample_interval)
(clear_mean, clear_max, _, clear_slope) = _calc_stats(clear, samples_per_window, sample_interval, H)
alpha = 1
for iteration in range(max_iterations):
scaled_clear = (alpha * clear)
clear_line_length = _line_length_windowed(scaled_clear, H, samples_per_window, sample_interval)
line_diff = (meas_line_length - clear_line_length)
slope_max_diff = _max_diff_windowed((meas - scaled_clear), H, samples_per_window)
c1 = (np.abs((meas_mean - (alpha * clear_mean))) < mean_diff)
c2 = (np.abs((meas_max - (alpha * clear_max))) < max_diff)
c3 = ((line_diff > lower_line_length) & (line_diff < upper_line_length))
c4 = (meas_slope_nstd < var_diff)
c5 = (slope_max_diff < slope_dev)
c6 = ((clear_mean != 0) & (~ np.isnan(clear_mean)))
clear_windows = (((((c1 & c2) & c3) & c4) & c5) & c6)
clear_samples = np.full_like(meas, False, dtype='bool')
idx = _clear_sample_index(clear_windows, samples_per_window, 'center', H)
clear_samples[idx] = True
previous_alpha = alpha
clear_meas = meas[clear_samples]
clear_clear = clear[clear_samples]
def rmse(alpha):
return np.sqrt(np.mean(((clear_meas - (alpha * clear_clear)) ** 2)))
alpha = minimize_scalar(rmse).x
if (round((alpha * 10000)) == round((previous_alpha * 10000))):
break
else:
import warnings
warnings.warn(('rescaling failed to converge after %s iterations' % max_iterations), RuntimeWarning)
if ispandas:
clear_samples = pd.Series(clear_samples, index=times)
if return_components:
components = OrderedDict()
components['mean_diff_flag'] = c1
components['max_diff_flag'] = c2
components['line_length_flag'] = c3
components['slope_nstd_flag'] = c4
components['slope_max_flag'] = c5
components['mean_nan_flag'] = c6
components['windows'] = clear_windows
components['mean_diff'] = np.abs((meas_mean - (alpha * clear_mean)))
components['max_diff'] = np.abs((meas_max - (alpha * clear_max)))
components['line_length'] = (meas_line_length - clear_line_length)
components['slope_nstd'] = meas_slope_nstd
components['slope_max'] = slope_max_diff
return (clear_samples, components, alpha)
else:
return clear_samples<|docstring|>Detects clear sky times according to the algorithm developed by Reno
and Hansen for GHI measurements. The algorithm [1]_ was designed and
validated for analyzing GHI time series only. Users may attempt to
apply it to other types of time series data using different filter
settings, but should be skeptical of the results.
The algorithm detects clear sky times by comparing statistics for a
measured time series and an expected clearsky time series.
Statistics are calculated using a sliding time window (e.g., 10
minutes). An iterative algorithm identifies clear periods, uses the
identified periods to estimate bias in the clearsky data, scales the
clearsky data and repeats.
Clear times are identified by meeting 5 criteria. Default values for
these thresholds are appropriate for 10 minute windows of 1 minute
GHI data.
Parameters
----------
measured : array or Series
Time series of measured GHI. [W/m2]
clearsky : array or Series
Time series of the expected clearsky GHI. [W/m2]
times : DatetimeIndex or None, default None.
Times of measured and clearsky values. If None the index of measured
will be used.
window_length : int, default 10
Length of sliding time window in minutes. Must be greater than 2
periods.
mean_diff : float, default 75
Threshold value for agreement between mean values of measured
and clearsky in each interval, see Eq. 6 in [1]. [W/m2]
max_diff : float, default 75
Threshold value for agreement between maxima of measured and
clearsky values in each interval, see Eq. 7 in [1]. [W/m2]
lower_line_length : float, default -5
Lower limit of line length criterion from Eq. 8 in [1].
Criterion satisfied when lower_line_length < line length difference
< upper_line_length.
upper_line_length : float, default 10
Upper limit of line length criterion from Eq. 8 in [1].
var_diff : float, default 0.005
Threshold value in Hz for the agreement between normalized
standard deviations of rate of change in irradiance, see Eqs. 9
through 11 in [1].
slope_dev : float, default 8
Threshold value for agreement between the largest magnitude of
change in successive values, see Eqs. 12 through 14 in [1].
max_iterations : int, default 20
Maximum number of times to apply a different scaling factor to
the clearsky and redetermine clear_samples. Must be 1 or larger.
return_components : bool, default False
Controls if additional output should be returned. See below.
Returns
-------
clear_samples : array or Series
Boolean array or Series of whether or not the given time is
clear. Return type is the same as the input type.
components : OrderedDict, optional
Dict of arrays of whether or not the given time window is clear
for each condition. Only provided if return_components is True.
alpha : scalar, optional
Scaling factor applied to the clearsky_ghi to obtain the
detected clear_samples. Only provided if return_components is
True.
Raises
------
ValueError
If measured is not a Series and times is not provided
NotImplementedError
If timestamps are not equally spaced
References
----------
.. [1] Reno, M.J. and C.W. Hansen, "Identification of periods of clear
sky irradiance in time series of GHI measurements" Renewable Energy,
v90, p. 520-531, 2016.
Notes
-----
Initial implementation in MATLAB by Matthew Reno. Modifications for
computational efficiency by Joshua Patrick and Curtis Martin. Ported
to Python by Will Holmgren, Tony Lorenzo, and Cliff Hansen.
Differences from MATLAB version:
* no support for unequal times
* automatically determines sample_interval
* requires a reference clear sky series instead of calculating one
from a user supplied location and UTCoffset
* parameters are controllable via keyword arguments
* option to return individual test components and clearsky scaling
parameter
* uses centered windows (Matlab function uses left-aligned windows)<|endoftext|> |
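An end-to-end usage sketch of detect_clearsky. The site, the period, and the scaled stand-in for a measured series are hypothetical; model='haurwitz' is used only because it needs no external turbidity data.

```python
import pandas as pd
from pvlib.location import Location
from pvlib.clearsky import detect_clearsky

loc = Location(35.0, -106.6, tz='US/Mountain', altitude=1600)  # hypothetical site
times = pd.date_range('2020-06-01 06:00', '2020-06-01 18:00',
                      freq='1min', tz=loc.tz)

expected = loc.get_clearsky(times, model='haurwitz')['ghi']
measured = 0.98 * expected  # stand-in for real measured GHI

clear = detect_clearsky(measured, expected, window_length=10)
print(int(clear.sum()), 'of', len(clear), 'samples flagged clear')

# with return_components=True, the per-criterion flags and the final
# clear-sky scaling factor alpha are returned as well
clear, components, alpha = detect_clearsky(measured, expected,
                                           return_components=True)
```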
fb2869211dc276024f260700c7d829098577060329a4448a0f4e374950dce20e | def bird(zenith, airmass_relative, aod380, aod500, precipitable_water, ozone=0.3, pressure=101325.0, dni_extra=1364.0, asymmetry=0.85, albedo=0.2):
'\n Bird Simple Clear Sky Broadband Solar Radiation Model\n\n Based on NREL Excel implementation by Daryl R. Myers [1, 2].\n\n Bird and Hulstrom define the zenith as the "angle between a line to\n the sun and the local zenith". There is no distinction in the paper\n between solar zenith and apparent (or refracted) zenith, but the\n relative airmass is defined using the Kasten 1966 expression, which\n requires apparent zenith. Although the formulation for calculated\n zenith is never explicitly defined in the report, since the purpose\n was to compare existing clear sky models with "rigorous radiative\n transfer models" (RTM) it is possible that apparent zenith was\n obtained as output from the RTM. However, the implentation presented\n in PVLIB is tested against the NREL Excel implementation by Daryl\n Myers which uses an analytical expression for solar zenith instead\n of apparent zenith.\n\n Parameters\n ----------\n zenith : numeric\n Solar or apparent zenith angle in degrees - see note above\n airmass_relative : numeric\n Relative airmass\n aod380 : numeric\n Aerosol optical depth [cm] measured at 380[nm]\n aod500 : numeric\n Aerosol optical depth [cm] measured at 500[nm]\n precipitable_water : numeric\n Precipitable water [cm]\n ozone : numeric\n Atmospheric ozone [cm], defaults to 0.3[cm]\n pressure : numeric\n Ambient pressure [Pa], defaults to 101325[Pa]\n dni_extra : numeric\n Extraterrestrial radiation [W/m^2], defaults to 1364[W/m^2]\n asymmetry : numeric\n Asymmetry factor, defaults to 0.85\n albedo : numeric\n Albedo, defaults to 0.2\n\n Returns\n -------\n clearsky : DataFrame (if Series input) or OrderedDict of arrays\n DataFrame/OrderedDict contains the columns/keys\n ``\'dhi\', \'dni\', \'ghi\', \'direct_horizontal\'`` in [W/m^2].\n\n See also\n --------\n pvlib.atmosphere.bird_hulstrom80_aod_bb\n pvlib.atmosphere.get_relative_airmass\n\n References\n ----------\n .. [1] R. E. Bird and R. L Hulstrom, "A Simplified Clear Sky model for\n Direct and Diffuse Insolation on Horizontal Surfaces" SERI Technical\n Report SERI/TR-642-761, Feb 1981. Solar Energy Research Institute,\n Golden, CO.\n\n .. [2] Daryl R. Myers, "Solar Radiation: Practical Modeling for Renewable\n Energy Applications", pp. 46-51 CRC Press (2013)\n\n .. [3] `NREL Bird Clear Sky Model <http://rredc.nrel.gov/solar/models/\n clearsky/>`_\n\n .. [4] `SERI/TR-642-761 <http://rredc.nrel.gov/solar/pubs/pdfs/\n tr-642-761.pdf>`_\n\n .. [5] `Error Reports <http://rredc.nrel.gov/solar/models/clearsky/\n error_reports.html>`_\n '
etr = dni_extra
ze_rad = np.deg2rad(zenith)
airmass = airmass_relative
am_press = atmosphere.get_absolute_airmass(airmass, pressure)
t_rayleigh = np.exp((((- 0.0903) * (am_press ** 0.84)) * ((1.0 + am_press) - (am_press ** 1.01))))
am_o3 = (ozone * airmass)
t_ozone = ((1.0 - ((0.1611 * am_o3) * ((1.0 + (139.48 * am_o3)) ** (- 0.3034)))) - ((0.002715 * am_o3) / ((1.0 + (0.044 * am_o3)) + (0.0003 * (am_o3 ** 2.0)))))
t_gases = np.exp(((- 0.0127) * (am_press ** 0.26)))
am_h2o = (airmass * precipitable_water)
t_water = (1.0 - ((2.4959 * am_h2o) / (((1.0 + (79.034 * am_h2o)) ** 0.6828) + (6.385 * am_h2o))))
bird_huldstrom = atmosphere.bird_hulstrom80_aod_bb(aod380, aod500)
t_aerosol = np.exp((((- (bird_huldstrom ** 0.873)) * ((1.0 + bird_huldstrom) - (bird_huldstrom ** 0.7088))) * (airmass ** 0.9108)))
taa = (1.0 - ((0.1 * ((1.0 - airmass) + (airmass ** 1.06))) * (1.0 - t_aerosol)))
rs = (0.0685 + ((1.0 - asymmetry) * (1.0 - (t_aerosol / taa))))
id_ = ((((((0.9662 * etr) * t_aerosol) * t_water) * t_gases) * t_ozone) * t_rayleigh)
ze_cos = np.where((zenith < 90), np.cos(ze_rad), 0.0)
id_nh = (id_ * ze_cos)
ias = ((((((((etr * ze_cos) * 0.79) * t_ozone) * t_gases) * t_water) * taa) * ((0.5 * (1.0 - t_rayleigh)) + (asymmetry * (1.0 - (t_aerosol / taa))))) / ((1.0 - airmass) + (airmass ** 1.02)))
gh = ((id_nh + ias) / (1.0 - (albedo * rs)))
diffuse_horiz = (gh - id_nh)
irrads = OrderedDict()
irrads['direct_horizontal'] = id_nh
irrads['ghi'] = gh
irrads['dni'] = id_
irrads['dhi'] = diffuse_horiz
if isinstance(irrads['dni'], pd.Series):
irrads = pd.DataFrame.from_dict(irrads)
return irrads | Bird Simple Clear Sky Broadband Solar Radiation Model
Based on NREL Excel implementation by Daryl R. Myers [1, 2].
Bird and Hulstrom define the zenith as the "angle between a line to
the sun and the local zenith". There is no distinction in the paper
between solar zenith and apparent (or refracted) zenith, but the
relative airmass is defined using the Kasten 1966 expression, which
requires apparent zenith. Although the formulation for calculated
zenith is never explicitly defined in the report, since the purpose
was to compare existing clear sky models with "rigorous radiative
transfer models" (RTM) it is possible that apparent zenith was
obtained as output from the RTM. However, the implementation presented
in PVLIB is tested against the NREL Excel implementation by Daryl
Myers which uses an analytical expression for solar zenith instead
of apparent zenith.
Parameters
----------
zenith : numeric
Solar or apparent zenith angle in degrees - see note above
airmass_relative : numeric
Relative airmass
aod380 : numeric
Aerosol optical depth [cm] measured at 380[nm]
aod500 : numeric
Aerosol optical depth [cm] measured at 500[nm]
precipitable_water : numeric
Precipitable water [cm]
ozone : numeric
Atmospheric ozone [cm], defaults to 0.3[cm]
pressure : numeric
Ambient pressure [Pa], defaults to 101325[Pa]
dni_extra : numeric
Extraterrestrial radiation [W/m^2], defaults to 1364[W/m^2]
asymmetry : numeric
Asymmetry factor, defaults to 0.85
albedo : numeric
Albedo, defaults to 0.2
Returns
-------
clearsky : DataFrame (if Series input) or OrderedDict of arrays
DataFrame/OrderedDict contains the columns/keys
``'dhi', 'dni', 'ghi', 'direct_horizontal'`` in [W/m^2].
See also
--------
pvlib.atmosphere.bird_hulstrom80_aod_bb
pvlib.atmosphere.get_relative_airmass
References
----------
.. [1] R. E. Bird and R. L Hulstrom, "A Simplified Clear Sky model for
Direct and Diffuse Insolation on Horizontal Surfaces" SERI Technical
Report SERI/TR-642-761, Feb 1981. Solar Energy Research Institute,
Golden, CO.
.. [2] Daryl R. Myers, "Solar Radiation: Practical Modeling for Renewable
Energy Applications", pp. 46-51 CRC Press (2013)
.. [3] `NREL Bird Clear Sky Model <http://rredc.nrel.gov/solar/models/
clearsky/>`_
.. [4] `SERI/TR-642-761 <http://rredc.nrel.gov/solar/pubs/pdfs/
tr-642-761.pdf>`_
.. [5] `Error Reports <http://rredc.nrel.gov/solar/models/clearsky/
error_reports.html>`_ | pvlib/clearsky.py | bird | Antoine-0/pvlib-python | 695 | python | def bird(zenith, airmass_relative, aod380, aod500, precipitable_water, ozone=0.3, pressure=101325.0, dni_extra=1364.0, asymmetry=0.85, albedo=0.2):
'\n Bird Simple Clear Sky Broadband Solar Radiation Model\n\n Based on NREL Excel implementation by Daryl R. Myers [1, 2].\n\n Bird and Hulstrom define the zenith as the "angle between a line to\n the sun and the local zenith". There is no distinction in the paper\n between solar zenith and apparent (or refracted) zenith, but the\n relative airmass is defined using the Kasten 1966 expression, which\n requires apparent zenith. Although the formulation for calculated\n zenith is never explicitly defined in the report, since the purpose\n was to compare existing clear sky models with "rigorous radiative\n transfer models" (RTM) it is possible that apparent zenith was\n obtained as output from the RTM. However, the implentation presented\n in PVLIB is tested against the NREL Excel implementation by Daryl\n Myers which uses an analytical expression for solar zenith instead\n of apparent zenith.\n\n Parameters\n ----------\n zenith : numeric\n Solar or apparent zenith angle in degrees - see note above\n airmass_relative : numeric\n Relative airmass\n aod380 : numeric\n Aerosol optical depth [cm] measured at 380[nm]\n aod500 : numeric\n Aerosol optical depth [cm] measured at 500[nm]\n precipitable_water : numeric\n Precipitable water [cm]\n ozone : numeric\n Atmospheric ozone [cm], defaults to 0.3[cm]\n pressure : numeric\n Ambient pressure [Pa], defaults to 101325[Pa]\n dni_extra : numeric\n Extraterrestrial radiation [W/m^2], defaults to 1364[W/m^2]\n asymmetry : numeric\n Asymmetry factor, defaults to 0.85\n albedo : numeric\n Albedo, defaults to 0.2\n\n Returns\n -------\n clearsky : DataFrame (if Series input) or OrderedDict of arrays\n DataFrame/OrderedDict contains the columns/keys\n ``\'dhi\', \'dni\', \'ghi\', \'direct_horizontal\'`` in [W/m^2].\n\n See also\n --------\n pvlib.atmosphere.bird_hulstrom80_aod_bb\n pvlib.atmosphere.get_relative_airmass\n\n References\n ----------\n .. [1] R. E. Bird and R. L Hulstrom, "A Simplified Clear Sky model for\n Direct and Diffuse Insolation on Horizontal Surfaces" SERI Technical\n Report SERI/TR-642-761, Feb 1981. Solar Energy Research Institute,\n Golden, CO.\n\n .. [2] Daryl R. Myers, "Solar Radiation: Practical Modeling for Renewable\n Energy Applications", pp. 46-51 CRC Press (2013)\n\n .. [3] `NREL Bird Clear Sky Model <http://rredc.nrel.gov/solar/models/\n clearsky/>`_\n\n .. [4] `SERI/TR-642-761 <http://rredc.nrel.gov/solar/pubs/pdfs/\n tr-642-761.pdf>`_\n\n .. [5] `Error Reports <http://rredc.nrel.gov/solar/models/clearsky/\n error_reports.html>`_\n '
etr = dni_extra
ze_rad = np.deg2rad(zenith)
airmass = airmass_relative
am_press = atmosphere.get_absolute_airmass(airmass, pressure)
t_rayleigh = np.exp((((- 0.0903) * (am_press ** 0.84)) * ((1.0 + am_press) - (am_press ** 1.01))))
am_o3 = (ozone * airmass)
t_ozone = ((1.0 - ((0.1611 * am_o3) * ((1.0 + (139.48 * am_o3)) ** (- 0.3034)))) - ((0.002715 * am_o3) / ((1.0 + (0.044 * am_o3)) + (0.0003 * (am_o3 ** 2.0)))))
t_gases = np.exp(((- 0.0127) * (am_press ** 0.26)))
am_h2o = (airmass * precipitable_water)
t_water = (1.0 - ((2.4959 * am_h2o) / (((1.0 + (79.034 * am_h2o)) ** 0.6828) + (6.385 * am_h2o))))
bird_huldstrom = atmosphere.bird_hulstrom80_aod_bb(aod380, aod500)
t_aerosol = np.exp((((- (bird_huldstrom ** 0.873)) * ((1.0 + bird_huldstrom) - (bird_huldstrom ** 0.7088))) * (airmass ** 0.9108)))
taa = (1.0 - ((0.1 * ((1.0 - airmass) + (airmass ** 1.06))) * (1.0 - t_aerosol)))
rs = (0.0685 + ((1.0 - asymmetry) * (1.0 - (t_aerosol / taa))))
id_ = ((((((0.9662 * etr) * t_aerosol) * t_water) * t_gases) * t_ozone) * t_rayleigh)
ze_cos = np.where((zenith < 90), np.cos(ze_rad), 0.0)
id_nh = (id_ * ze_cos)
ias = ((((((((etr * ze_cos) * 0.79) * t_ozone) * t_gases) * t_water) * taa) * ((0.5 * (1.0 - t_rayleigh)) + (asymmetry * (1.0 - (t_aerosol / taa))))) / ((1.0 - airmass) + (airmass ** 1.02)))
gh = ((id_nh + ias) / (1.0 - (albedo * rs)))
diffuse_horiz = (gh - id_nh)
irrads = OrderedDict()
irrads['direct_horizontal'] = id_nh
irrads['ghi'] = gh
irrads['dni'] = id_
irrads['dhi'] = diffuse_horiz
if isinstance(irrads['dni'], pd.Series):
irrads = pd.DataFrame.from_dict(irrads)
return irrads | def bird(zenith, airmass_relative, aod380, aod500, precipitable_water, ozone=0.3, pressure=101325.0, dni_extra=1364.0, asymmetry=0.85, albedo=0.2):
'\n Bird Simple Clear Sky Broadband Solar Radiation Model\n\n Based on NREL Excel implementation by Daryl R. Myers [1, 2].\n\n Bird and Hulstrom define the zenith as the "angle between a line to\n the sun and the local zenith". There is no distinction in the paper\n between solar zenith and apparent (or refracted) zenith, but the\n relative airmass is defined using the Kasten 1966 expression, which\n requires apparent zenith. Although the formulation for calculated\n zenith is never explicitly defined in the report, since the purpose\n was to compare existing clear sky models with "rigorous radiative\n transfer models" (RTM) it is possible that apparent zenith was\n obtained as output from the RTM. However, the implentation presented\n in PVLIB is tested against the NREL Excel implementation by Daryl\n Myers which uses an analytical expression for solar zenith instead\n of apparent zenith.\n\n Parameters\n ----------\n zenith : numeric\n Solar or apparent zenith angle in degrees - see note above\n airmass_relative : numeric\n Relative airmass\n aod380 : numeric\n Aerosol optical depth [cm] measured at 380[nm]\n aod500 : numeric\n Aerosol optical depth [cm] measured at 500[nm]\n precipitable_water : numeric\n Precipitable water [cm]\n ozone : numeric\n Atmospheric ozone [cm], defaults to 0.3[cm]\n pressure : numeric\n Ambient pressure [Pa], defaults to 101325[Pa]\n dni_extra : numeric\n Extraterrestrial radiation [W/m^2], defaults to 1364[W/m^2]\n asymmetry : numeric\n Asymmetry factor, defaults to 0.85\n albedo : numeric\n Albedo, defaults to 0.2\n\n Returns\n -------\n clearsky : DataFrame (if Series input) or OrderedDict of arrays\n DataFrame/OrderedDict contains the columns/keys\n ``\'dhi\', \'dni\', \'ghi\', \'direct_horizontal\'`` in [W/m^2].\n\n See also\n --------\n pvlib.atmosphere.bird_hulstrom80_aod_bb\n pvlib.atmosphere.get_relative_airmass\n\n References\n ----------\n .. [1] R. E. Bird and R. L Hulstrom, "A Simplified Clear Sky model for\n Direct and Diffuse Insolation on Horizontal Surfaces" SERI Technical\n Report SERI/TR-642-761, Feb 1981. Solar Energy Research Institute,\n Golden, CO.\n\n .. [2] Daryl R. Myers, "Solar Radiation: Practical Modeling for Renewable\n Energy Applications", pp. 46-51 CRC Press (2013)\n\n .. [3] `NREL Bird Clear Sky Model <http://rredc.nrel.gov/solar/models/\n clearsky/>`_\n\n .. [4] `SERI/TR-642-761 <http://rredc.nrel.gov/solar/pubs/pdfs/\n tr-642-761.pdf>`_\n\n .. [5] `Error Reports <http://rredc.nrel.gov/solar/models/clearsky/\n error_reports.html>`_\n '
etr = dni_extra
ze_rad = np.deg2rad(zenith)
airmass = airmass_relative
am_press = atmosphere.get_absolute_airmass(airmass, pressure)
t_rayleigh = np.exp((((- 0.0903) * (am_press ** 0.84)) * ((1.0 + am_press) - (am_press ** 1.01))))
am_o3 = (ozone * airmass)
t_ozone = ((1.0 - ((0.1611 * am_o3) * ((1.0 + (139.48 * am_o3)) ** (- 0.3034)))) - ((0.002715 * am_o3) / ((1.0 + (0.044 * am_o3)) + (0.0003 * (am_o3 ** 2.0)))))
t_gases = np.exp(((- 0.0127) * (am_press ** 0.26)))
am_h2o = (airmass * precipitable_water)
t_water = (1.0 - ((2.4959 * am_h2o) / (((1.0 + (79.034 * am_h2o)) ** 0.6828) + (6.385 * am_h2o))))
bird_huldstrom = atmosphere.bird_hulstrom80_aod_bb(aod380, aod500)
t_aerosol = np.exp((((- (bird_huldstrom ** 0.873)) * ((1.0 + bird_huldstrom) - (bird_huldstrom ** 0.7088))) * (airmass ** 0.9108)))
taa = (1.0 - ((0.1 * ((1.0 - airmass) + (airmass ** 1.06))) * (1.0 - t_aerosol)))
rs = (0.0685 + ((1.0 - asymmetry) * (1.0 - (t_aerosol / taa))))
id_ = ((((((0.9662 * etr) * t_aerosol) * t_water) * t_gases) * t_ozone) * t_rayleigh)
ze_cos = np.where((zenith < 90), np.cos(ze_rad), 0.0)
id_nh = (id_ * ze_cos)
ias = ((((((((etr * ze_cos) * 0.79) * t_ozone) * t_gases) * t_water) * taa) * ((0.5 * (1.0 - t_rayleigh)) + (asymmetry * (1.0 - (t_aerosol / taa))))) / ((1.0 - airmass) + (airmass ** 1.02)))
gh = ((id_nh + ias) / (1.0 - (albedo * rs)))
diffuse_horiz = (gh - id_nh)
irrads = OrderedDict()
irrads['direct_horizontal'] = id_nh
irrads['ghi'] = gh
irrads['dni'] = id_
irrads['dhi'] = diffuse_horiz
if isinstance(irrads['dni'], pd.Series):
irrads = pd.DataFrame.from_dict(irrads)
return irrads<|docstring|>Bird Simple Clear Sky Broadband Solar Radiation Model
Based on NREL Excel implementation by Daryl R. Myers [1, 2].
Bird and Hulstrom define the zenith as the "angle between a line to
the sun and the local zenith". There is no distinction in the paper
between solar zenith and apparent (or refracted) zenith, but the
relative airmass is defined using the Kasten 1966 expression, which
requires apparent zenith. Although the formulation for calculated
zenith is never explicitly defined in the report, since the purpose
was to compare existing clear sky models with "rigorous radiative
transfer models" (RTM) it is possible that apparent zenith was
obtained as output from the RTM. However, the implementation presented
in PVLIB is tested against the NREL Excel implementation by Daryl
Myers which uses an analytical expression for solar zenith instead
of apparent zenith.
Parameters
----------
zenith : numeric
Solar or apparent zenith angle in degrees - see note above
airmass_relative : numeric
Relative airmass
aod380 : numeric
Aerosol optical depth [cm] measured at 380[nm]
aod500 : numeric
Aerosol optical depth [cm] measured at 500[nm]
precipitable_water : numeric
Precipitable water [cm]
ozone : numeric
Atmospheric ozone [cm], defaults to 0.3[cm]
pressure : numeric
Ambient pressure [Pa], defaults to 101325[Pa]
dni_extra : numeric
Extraterrestrial radiation [W/m^2], defaults to 1364[W/m^2]
asymmetry : numeric
Asymmetry factor, defaults to 0.85
albedo : numeric
Albedo, defaults to 0.2
Returns
-------
clearsky : DataFrame (if Series input) or OrderedDict of arrays
DataFrame/OrderedDict contains the columns/keys
``'dhi', 'dni', 'ghi', 'direct_horizontal'`` in [W/m^2].
See also
--------
pvlib.atmosphere.bird_hulstrom80_aod_bb
pvlib.atmosphere.get_relative_airmass
References
----------
.. [1] R. E. Bird and R. L Hulstrom, "A Simplified Clear Sky model for
Direct and Diffuse Insolation on Horizontal Surfaces" SERI Technical
Report SERI/TR-642-761, Feb 1981. Solar Energy Research Institute,
Golden, CO.
.. [2] Daryl R. Myers, "Solar Radiation: Practical Modeling for Renewable
Energy Applications", pp. 46-51 CRC Press (2013)
.. [3] `NREL Bird Clear Sky Model <http://rredc.nrel.gov/solar/models/
clearsky/>`_
.. [4] `SERI/TR-642-761 <http://rredc.nrel.gov/solar/pubs/pdfs/
tr-642-761.pdf>`_
.. [5] `Error Reports <http://rredc.nrel.gov/solar/models/clearsky/
error_reports.html>`_<|endoftext|> |
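A minimal usage sketch for the bird() clear-sky model documented above. The input values are purely illustrative, and it assumes pvlib exposes this function as pvlib.clearsky.bird alongside pvlib.atmosphere.get_relative_airmass:

import pandas as pd
from pvlib import atmosphere, clearsky

# Three illustrative solar zenith angles in degrees.
zenith = pd.Series([20.0, 50.0, 80.0])
# Relative airmass for those angles (Kasten-style expression).
airmass = atmosphere.get_relative_airmass(zenith)
# Clear-sky irradiance components; aerosol and water values are placeholders.
irrads = clearsky.bird(zenith, airmass, aod380=0.15, aod500=0.10, precipitable_water=1.5)
print(irrads[['ghi', 'dni', 'dhi']])

Because the zenith input is a pandas Series, the return value is a DataFrame with the 'dhi', 'dni', 'ghi', and 'direct_horizontal' columns described in the docstring.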
2e701e7223c16ad1d447fa7cbb92eb89f81d09fab510382181aa638d81392a02 | def get_lock(self, path: Optional[str]=None) -> filelock.FileLock:
'\n Retrieve the appropriate `FileLock` backend for this storage plugin\n\n :param str path: The path to use for locking\n :return: A `FileLock` backend for obtaining locks\n :rtype: filelock.FileLock\n '
if (path is None):
path = self.mirror_base_path.joinpath(self.flock_path).as_posix()
logger.debug(f'Retrieving FileLock instance @ {path}')
return filelock.FileLock(path) | Retrieve the appropriate `FileLock` backend for this storage plugin
:param str path: The path to use for locking
:return: A `FileLock` backend for obtaining locks
:rtype: filelock.FileLock | src/bandersnatch_storage_plugins/filesystem.py | get_lock | terrorizer1980/bandersnatch | 310 | python | def get_lock(self, path: Optional[str]=None) -> filelock.FileLock:
'\n Retrieve the appropriate `FileLock` backend for this storage plugin\n\n :param str path: The path to use for locking\n :return: A `FileLock` backend for obtaining locks\n :rtype: filelock.FileLock\n '
if (path is None):
path = self.mirror_base_path.joinpath(self.flock_path).as_posix()
logger.debug(f'Retrieving FileLock instance @ {path}')
return filelock.FileLock(path) | def get_lock(self, path: Optional[str]=None) -> filelock.FileLock:
'\n Retrieve the appropriate `FileLock` backend for this storage plugin\n\n :param str path: The path to use for locking\n :return: A `FileLock` backend for obtaining locks\n :rtype: filelock.FileLock\n '
if (path is None):
path = self.mirror_base_path.joinpath(self.flock_path).as_posix()
logger.debug(f'Retrieving FileLock instance @ {path}')
return filelock.FileLock(path)<|docstring|>Retrieve the appropriate `FileLock` backend for this storage plugin
:param str path: The path to use for locking
:return: A `FileLock` backend for obtaining locks
:rtype: filelock.FileLock<|endoftext|>
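The lock returned by get_lock() above is a standard filelock.FileLock and works as a context manager. A sketch, where `storage` is assumed to be an already-initialized instance of this plugin and the lock path is illustrative:

lock = storage.get_lock('/srv/pypi/.mirror-lock')
with lock.acquire(timeout=10):  # raises filelock.Timeout if the lock is held elsewhere
    pass  # critical section: safe to mutate the mirror here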
b871a306f347cfeda2d08f99956413d66142971a84c47f8e9f23b1d27df51c8b | def find(self, root: PATH_TYPES, dirs: bool=True) -> str:
"A test helper simulating 'find'.\n\n Iterates over directories and filenames, given as relative paths to the\n root.\n\n "
results = self.walk(root, dirs=dirs)
results.sort()
return '\n'.join((str(result.relative_to(root)) for result in results)) | A test helper simulating 'find'.
Iterates over directories and filenames, given as relative paths to the
root. | src/bandersnatch_storage_plugins/filesystem.py | find | terrorizer1980/bandersnatch | 310 | python | def find(self, root: PATH_TYPES, dirs: bool=True) -> str:
"A test helper simulating 'find'.\n\n Iterates over directories and filenames, given as relative paths to the\n root.\n\n "
results = self.walk(root, dirs=dirs)
results.sort()
return '\n'.join((str(result.relative_to(root)) for result in results)) | def find(self, root: PATH_TYPES, dirs: bool=True) -> str:
"A test helper simulating 'find'.\n\n Iterates over directories and filenames, given as relative paths to the\n root.\n\n "
results = self.walk(root, dirs=dirs)
results.sort()
return '\n'.join((str(result.relative_to(root)) for result in results))<|docstring|>A test helper simulating 'find'.
Iterates over directories and filenames, given as relative paths to the
root.<|endoftext|> |
707a770099d53e619be2219547594d9a3c037c79d9ed8a6527450ac5001e3ec2 | @contextlib.contextmanager
def rewrite(self, filepath: PATH_TYPES, mode: str='w', **kw: Any) -> Generator[(IO, None, None)]:
'Rewrite an existing file atomically to avoid programs running in\n parallel to have race conditions while reading.'
if isinstance(filepath, str):
base_dir = os.path.dirname(filepath)
filename = os.path.basename(filepath)
else:
base_dir = str(filepath.parent)
filename = filepath.name
with tempfile.NamedTemporaryFile(mode=mode, prefix=f'.{filename}.', delete=False, dir=base_dir, **kw) as f:
filepath_tmp = f.name
(yield f)
if (not self.exists(filepath_tmp)):
return
os.chmod(filepath_tmp, 33188)
logger.debug(f'Writing temporary file {filepath_tmp} to target destination: {filepath!s}')
self.move_file(filepath_tmp, filepath) | Rewrite an existing file atomically to avoid programs running in
parallel to have race conditions while reading. | src/bandersnatch_storage_plugins/filesystem.py | rewrite | terrorizer1980/bandersnatch | 310 | python | @contextlib.contextmanager
def rewrite(self, filepath: PATH_TYPES, mode: str='w', **kw: Any) -> Generator[(IO, None, None)]:
'Rewrite an existing file atomically to avoid programs running in\n parallel to have race conditions while reading.'
if isinstance(filepath, str):
base_dir = os.path.dirname(filepath)
filename = os.path.basename(filepath)
else:
base_dir = str(filepath.parent)
filename = filepath.name
with tempfile.NamedTemporaryFile(mode=mode, prefix=f'.{filename}.', delete=False, dir=base_dir, **kw) as f:
filepath_tmp = f.name
(yield f)
if (not self.exists(filepath_tmp)):
return
os.chmod(filepath_tmp, 33188)
logger.debug(f'Writing temporary file {filepath_tmp} to target destination: {filepath!s}')
self.move_file(filepath_tmp, filepath) | @contextlib.contextmanager
def rewrite(self, filepath: PATH_TYPES, mode: str='w', **kw: Any) -> Generator[(IO, None, None)]:
'Rewrite an existing file atomically to avoid programs running in\n parallel to have race conditions while reading.'
if isinstance(filepath, str):
base_dir = os.path.dirname(filepath)
filename = os.path.basename(filepath)
else:
base_dir = str(filepath.parent)
filename = filepath.name
with tempfile.NamedTemporaryFile(mode=mode, prefix=f'.{filename}.', delete=False, dir=base_dir, **kw) as f:
filepath_tmp = f.name
(yield f)
if (not self.exists(filepath_tmp)):
return
os.chmod(filepath_tmp, 33188)
logger.debug(f'Writing temporary file {filepath_tmp} to target destination: {filepath!s}')
self.move_file(filepath_tmp, filepath)<|docstring|>Rewrite an existing file atomically to avoid programs running in
parallel to have race conditions while reading.<|endoftext|> |
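A sketch of using the rewrite() context manager above; the path and contents are illustrative and `storage` is assumed to be an initialized plugin instance:

# Writes go to a temporary file in the same directory, which is then
# moved over the destination, so readers never observe a half-written file.
with storage.rewrite('web/simple/index.html') as fh:
    fh.write('<html><body>updated index</body></html>\n')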
a08eed8f7ae075619f519d7e0b3157bdf50c38b91ba57cfc75507e8fd9dede09 | @contextlib.contextmanager
def update_safe(self, filename: PATH_TYPES, **kw: Any) -> Generator[(IO, None, None)]:
"Rewrite a file atomically.\n\n Clients are allowed to delete the tmpfile to signal that they don't\n want to have it updated.\n\n "
with tempfile.NamedTemporaryFile(dir=os.path.dirname(filename), delete=False, prefix=f'{os.path.basename(filename)}.', **kw) as tf:
if self.exists(filename):
os.chmod(tf.name, (os.stat(filename).st_mode & 4095))
tf.has_changed = False
(yield tf)
if (not self.exists(tf.name)):
return
filename_tmp = tf.name
if (self.exists(filename) and self.compare_files(filename, filename_tmp)):
logger.debug(f'File not changed...deleting temporary file: {filename_tmp}')
os.unlink(filename_tmp)
else:
logger.debug(f'Modifying destination: {filename!s} with: {filename_tmp}')
self.move_file(filename_tmp, filename) | Rewrite a file atomically.
Clients are allowed to delete the tmpfile to signal that they don't
want to have it updated. | src/bandersnatch_storage_plugins/filesystem.py | update_safe | terrorizer1980/bandersnatch | 310 | python | @contextlib.contextmanager
def update_safe(self, filename: PATH_TYPES, **kw: Any) -> Generator[(IO, None, None)]:
"Rewrite a file atomically.\n\n Clients are allowed to delete the tmpfile to signal that they don't\n want to have it updated.\n\n "
with tempfile.NamedTemporaryFile(dir=os.path.dirname(filename), delete=False, prefix=f'{os.path.basename(filename)}.', **kw) as tf:
if self.exists(filename):
os.chmod(tf.name, (os.stat(filename).st_mode & 4095))
tf.has_changed = False
(yield tf)
if (not self.exists(tf.name)):
return
filename_tmp = tf.name
if (self.exists(filename) and self.compare_files(filename, filename_tmp)):
logger.debug(f'File not changed...deleting temporary file: {filename_tmp}')
os.unlink(filename_tmp)
else:
logger.debug(f'Modifying destination: {filename!s} with: {filename_tmp}')
self.move_file(filename_tmp, filename) | @contextlib.contextmanager
def update_safe(self, filename: PATH_TYPES, **kw: Any) -> Generator[(IO, None, None)]:
"Rewrite a file atomically.\n\n Clients are allowed to delete the tmpfile to signal that they don't\n want to have it updated.\n\n "
with tempfile.NamedTemporaryFile(dir=os.path.dirname(filename), delete=False, prefix=f'{os.path.basename(filename)}.', **kw) as tf:
if self.exists(filename):
os.chmod(tf.name, (os.stat(filename).st_mode & 4095))
tf.has_changed = False
(yield tf)
if (not self.exists(tf.name)):
return
filename_tmp = tf.name
if (self.exists(filename) and self.compare_files(filename, filename_tmp)):
logger.debug(f'File not changed...deleting temporary file: {filename_tmp}')
os.unlink(filename_tmp)
else:
logger.debug(f'Modifying destination: {filename!s} with: {filename_tmp}')
self.move_file(filename_tmp, filename)<|docstring|>Rewrite a file atomically.
Clients are allowed to delete the tmpfile to signal that they don't
want to have it updated.<|endoftext|> |
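The delete-to-cancel behaviour of update_safe() above can be exercised as follows. This sketch always cancels, purely to show the mechanism; `storage` is an assumed initialized instance and the path is illustrative:

import os

with storage.update_safe('web/simple/index.html', mode='w') as tf:
    tf.write('candidate contents')
    # Removing the tempfile signals update_safe to leave the
    # destination untouched.
    os.unlink(tf.name)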
3777269bbcd76a7c10df628e0b2158cc859547cfc308e4834af124d52f6d9a5d | def compare_files(self, file1: PATH_TYPES, file2: PATH_TYPES) -> bool:
'Compare two files, returning true if they are the same and False if not.'
return filecmp.cmp(str(file1), str(file2), shallow=False) | Compare two files, returning true if they are the same and False if not. | src/bandersnatch_storage_plugins/filesystem.py | compare_files | terrorizer1980/bandersnatch | 310 | python | def compare_files(self, file1: PATH_TYPES, file2: PATH_TYPES) -> bool:
return filecmp.cmp(str(file1), str(file2), shallow=False) | def compare_files(self, file1: PATH_TYPES, file2: PATH_TYPES) -> bool:
return filecmp.cmp(str(file1), str(file2), shallow=False)<|docstring|>Compare two files, returning true if they are the same and False if not.<|endoftext|> |
225a161626b3f4a0e16f4864a75ef725c0e95461e916313616ce781d45c88386 | def copy_file(self, source: PATH_TYPES, dest: PATH_TYPES) -> None:
'Copy a file from **source** to **dest**'
if (not self.exists(source)):
raise FileNotFoundError(source)
shutil.copy(source, dest)
return | Copy a file from **source** to **dest** | src/bandersnatch_storage_plugins/filesystem.py | copy_file | terrorizer1980/bandersnatch | 310 | python | def copy_file(self, source: PATH_TYPES, dest: PATH_TYPES) -> None:
if (not self.exists(source)):
raise FileNotFoundError(source)
shutil.copy(source, dest)
return | def copy_file(self, source: PATH_TYPES, dest: PATH_TYPES) -> None:
if (not self.exists(source)):
raise FileNotFoundError(source)
shutil.copy(source, dest)
return<|docstring|>Copy a file from **source** to **dest**<|endoftext|> |
b1532eaad85aad98d434b994ae93a9171e3a766b413ccef11e69a7b41319addb | def move_file(self, source: PATH_TYPES, dest: PATH_TYPES) -> None:
'Move a file from **source** to **dest**'
if (not self.exists(source)):
raise FileNotFoundError(source)
shutil.move(str(source), dest)
return | Move a file from **source** to **dest** | src/bandersnatch_storage_plugins/filesystem.py | move_file | terrorizer1980/bandersnatch | 310 | python | def move_file(self, source: PATH_TYPES, dest: PATH_TYPES) -> None:
if (not self.exists(source)):
raise FileNotFoundError(source)
shutil.move(str(source), dest)
return | def move_file(self, source: PATH_TYPES, dest: PATH_TYPES) -> None:
if (not self.exists(source)):
raise FileNotFoundError(source)
shutil.move(str(source), dest)
return<|docstring|>Move a file from **source** to **dest**<|endoftext|> |
b4d12d3790c6d57f2b4b4fc540781c29d0cf0f9a2048e20a807e4f85f7f081a3 | def write_file(self, path: PATH_TYPES, contents: Union[(str, bytes)]) -> None:
'Write data to the provided path. If **contents** is a string, the file will\n be opened and written in "w" + "utf-8" mode; if bytes are supplied, it will be\n written using "wb" mode (i.e. binary write).'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
if isinstance(contents, str):
path.write_text(contents)
else:
path.write_bytes(contents) | Write data to the provided path. If **contents** is a string, the file will
be opened and written in "w" + "utf-8" mode; if bytes are supplied, it will be
written using "wb" mode (i.e. binary write). | src/bandersnatch_storage_plugins/filesystem.py | write_file | terrorizer1980/bandersnatch | 310 | python | def write_file(self, path: PATH_TYPES, contents: Union[(str, bytes)]) -> None:
'Write data to the provided path. If **contents** is a string, the file will\n be opened and written in "w" + "utf-8" mode; if bytes are supplied, it will be\n written using "wb" mode (i.e. binary write).'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
if isinstance(contents, str):
path.write_text(contents)
else:
path.write_bytes(contents) | def write_file(self, path: PATH_TYPES, contents: Union[(str, bytes)]) -> None:
'Write data to the provided path. If **contents** is a string, the file will\n be opened and written in "w" + "utf-8" mode; if bytes are supplied, it will be\n written using "wb" mode (i.e. binary write).'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
if isinstance(contents, str):
path.write_text(contents)
else:
path.write_bytes(contents)<|docstring|>Write data to the provided path. If **contents** is a string, the file will
be opened and written in "w" + "utf-8" mode; if bytes are supplied, it will be
written using "wb" mode (i.e. binary write).<|endoftext|>
7a95bdaf778963b1d397b3a15d7a05bc84d729631f5c72c1c265ae92cc33fced | @contextlib.contextmanager
def open_file(self, path: PATH_TYPES, text: bool=True, encoding: str='utf-8') -> Generator[(IO, None, None)]:
"Yield a file context to iterate over. If text is true, open the file with\n 'rb' mode specified."
mode = ('r' if text else 'rb')
kwargs: Dict[(str, str)] = {}
if text:
kwargs['encoding'] = encoding
with open(path, mode=mode, **kwargs) as fh:
(yield fh) | Yield a file context to iterate over. If text is false, open the file with
'rb' mode specified. | src/bandersnatch_storage_plugins/filesystem.py | open_file | terrorizer1980/bandersnatch | 310 | python | @contextlib.contextmanager
def open_file(self, path: PATH_TYPES, text: bool=True, encoding: str='utf-8') -> Generator[(IO, None, None)]:
"Yield a file context to iterate over. If text is true, open the file with\n 'rb' mode specified."
mode = ('r' if text else 'rb')
kwargs: Dict[(str, str)] = {}
if text:
kwargs['encoding'] = encoding
with open(path, mode=mode, **kwargs) as fh:
(yield fh) | @contextlib.contextmanager
def open_file(self, path: PATH_TYPES, text: bool=True, encoding: str='utf-8') -> Generator[(IO, None, None)]:
"Yield a file context to iterate over. If text is true, open the file with\n 'rb' mode specified."
mode = ('r' if text else 'rb')
kwargs: Dict[(str, str)] = {}
if text:
kwargs['encoding'] = encoding
with open(path, mode=mode, **kwargs) as fh:
(yield fh)<|docstring|>Yield a file context to iterate over. If text is false, open the file with
'rb' mode specified.<|endoftext|> |
d1a96cc3e66bbcb250a63dccf67bde3db8000d29e1445a6f6d706bed1c90e612 | def read_file(self, path: PATH_TYPES, text: bool=True, encoding: str='utf-8', errors: Optional[str]=None) -> Union[(str, bytes)]:
'Return the contents of the requested file, either a bytestring or a unicode\n string depending on whether **text** is True'
with self.open_file(path, text=text, encoding=encoding) as fh:
contents: Union[(str, bytes)] = fh.read()
return contents | Return the contents of the requested file, either a bytestring or a unicode
string depending on whether **text** is True | src/bandersnatch_storage_plugins/filesystem.py | read_file | terrorizer1980/bandersnatch | 310 | python | def read_file(self, path: PATH_TYPES, text: bool=True, encoding: str='utf-8', errors: Optional[str]=None) -> Union[(str, bytes)]:
'Return the contents of the requested file, either a bytestring or a unicode\n string depending on whether **text** is True'
with self.open_file(path, text=text, encoding=encoding) as fh:
contents: Union[(str, bytes)] = fh.read()
return contents | def read_file(self, path: PATH_TYPES, text: bool=True, encoding: str='utf-8', errors: Optional[str]=None) -> Union[(str, bytes)]:
'Return the contents of the requested file, either a bytestring or a unicode\n string depending on whether **text** is True'
with self.open_file(path, text=text, encoding=encoding) as fh:
contents: Union[(str, bytes)] = fh.read()
return contents<|docstring|>Return the contents of the requested file, either a bytestring or a unicode
string depending on whether **text** is True<|endoftext|> |
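write_file() and read_file() above form a simple round trip; a sketch with illustrative paths (`storage` assumed to be an initialized plugin instance):

storage.write_file('notes.txt', 'hello mirror\n')   # text write
storage.write_file('blob.bin', b'\x00\x01\x02')     # binary write
text = storage.read_file('notes.txt')               # returns str
data = storage.read_file('blob.bin', text=False)    # returns bytes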
b7e15320a66b9e6cf9627b64daaf39975dab16444aa8c0290f1b22f6bf3e4419 | def delete_file(self, path: PATH_TYPES, dry_run: bool=False) -> int:
'Delete the file at the provided path.'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
log_prefix = ('[DRY RUN] ' if dry_run else '')
logger.info(f'{log_prefix}Removing file: {path!s}')
if (not dry_run):
path.unlink()
return 0 | Delete the file at the provided path. | src/bandersnatch_storage_plugins/filesystem.py | delete_file | terrorizer1980/bandersnatch | 310 | python | def delete_file(self, path: PATH_TYPES, dry_run: bool=False) -> int:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
log_prefix = ('[DRY RUN] ' if dry_run else )
logger.info(f'{log_prefix}Removing file: {path!s}')
if (not dry_run):
path.unlink()
return 0 | def delete_file(self, path: PATH_TYPES, dry_run: bool=False) -> int:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
log_prefix = ('[DRY RUN] ' if dry_run else )
logger.info(f'{log_prefix}Removing file: {path!s}')
if (not dry_run):
path.unlink()
return 0<|docstring|>Delete the file at the provided path.<|endoftext|>
a256342245ab01fbcbe1fe9fea821fcf10b7bcbfd315cee37225cdbbca2d71c6 | def mkdir(self, path: PATH_TYPES, exist_ok: bool=False, parents: bool=False) -> None:
'Create the provided directory'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.mkdir(exist_ok=exist_ok, parents=parents) | Create the provided directory | src/bandersnatch_storage_plugins/filesystem.py | mkdir | terrorizer1980/bandersnatch | 310 | python | def mkdir(self, path: PATH_TYPES, exist_ok: bool=False, parents: bool=False) -> None:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.mkdir(exist_ok=exist_ok, parents=parents) | def mkdir(self, path: PATH_TYPES, exist_ok: bool=False, parents: bool=False) -> None:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.mkdir(exist_ok=exist_ok, parents=parents)<|docstring|>Create the provided directory<|endoftext|> |
e9c808b459a8d343e8a963c05e6ed53a02e8c2dab9811c10b702911506698a7b | def rmdir(self, path: PATH_TYPES, recurse: bool=False, force: bool=False, ignore_errors: bool=False, dry_run: bool=False) -> int:
'Remove the directory. If recurse is True, allow removing empty children.\n If force is true, remove contents destructively.'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
log_prefix = ('[DRY RUN] ' if dry_run else '')
if force:
logger.info(f'{log_prefix}Forcing removal of files under {path!s}')
if (not dry_run):
shutil.rmtree(path, ignore_errors=ignore_errors)
return 0
if recurse:
for subdir in path.iterdir():
if (not subdir.is_dir()):
continue
logger.info(f'{log_prefix}Removing directory: {subdir!s}')
if (not dry_run):
rc = self.rmdir(subdir, recurse=recurse, force=force, ignore_errors=ignore_errors)
if (rc != 0):
return rc
logger.info(f'{log_prefix}Removing directory: {path!s}')
if (not dry_run):
path.rmdir()
return 0 | Remove the directory. If recurse is True, allow removing empty children.
If force is true, remove contents destructively. | src/bandersnatch_storage_plugins/filesystem.py | rmdir | terrorizer1980/bandersnatch | 310 | python | def rmdir(self, path: PATH_TYPES, recurse: bool=False, force: bool=False, ignore_errors: bool=False, dry_run: bool=False) -> int:
'Remove the directory. If recurse is True, allow removing empty children.\n If force is true, remove contents destructively.'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
log_prefix = ('[DRY RUN] ' if dry_run else )
if force:
logger.info(f'{log_prefix}Forcing removal of files under {path!s}')
if (not dry_run):
shutil.rmtree(path, ignore_errors=ignore_errors)
return 0
if recurse:
for subdir in path.iterdir():
if (not subdir.is_dir()):
continue
logger.info(f'{log_prefix}Removing directory: {subdir!s}')
if (not dry_run):
rc = self.rmdir(subdir, recurse=recurse, force=force, ignore_errors=ignore_errors)
if (rc != 0):
return rc
logger.info(f'{log_prefix}Removing directory: {path!s}')
if (not dry_run):
path.rmdir()
return 0 | def rmdir(self, path: PATH_TYPES, recurse: bool=False, force: bool=False, ignore_errors: bool=False, dry_run: bool=False) -> int:
'Remove the directory. If recurse is True, allow removing empty children.\n If force is true, remove contents destructively.'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
log_prefix = ('[DRY RUN] ' if dry_run else )
if force:
logger.info(f'{log_prefix}Forcing removal of files under {path!s}')
if (not dry_run):
shutil.rmtree(path, ignore_errors=ignore_errors)
return 0
if recurse:
for subdir in path.iterdir():
if (not subdir.is_dir()):
continue
logger.info(f'{log_prefix}Removing directory: {subdir!s}')
if (not dry_run):
rc = self.rmdir(subdir, recurse=recurse, force=force, ignore_errors=ignore_errors)
if (rc != 0):
return rc
logger.info(f'{log_prefix}Removing directory: {path!s}')
if (not dry_run):
path.rmdir()
return 0<|docstring|>Remove the directory. If recurse is True, allow removing empty children.
If force is true, remove contents destructively.<|endoftext|> |
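Both delete_file() and rmdir() above accept dry_run, which logs the would-be removals without touching the filesystem. A sketch (paths illustrative, `storage` assumed initialized):

# Log what a forced recursive removal would delete.
storage.rmdir('web/simple/old-package', force=True, dry_run=True)
# Perform the removal once the logged plan looks right.
storage.rmdir('web/simple/old-package', force=True)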
cf2cef59ffb0e1781b1331f6c37e2169ded2011dc4b0d607e90b7b551c965ffa | def exists(self, path: PATH_TYPES) -> bool:
'Check whether the provided path exists'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.exists() | Check whether the provided path exists | src/bandersnatch_storage_plugins/filesystem.py | exists | terrorizer1980/bandersnatch | 310 | python | def exists(self, path: PATH_TYPES) -> bool:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.exists() | def exists(self, path: PATH_TYPES) -> bool:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.exists()<|docstring|>Check whether the provided path exists<|endoftext|> |
2cade71a2a11bdfb404e51e044c06eb05b1333ae789b481075e648871230c7b6 | def is_dir(self, path: PATH_TYPES) -> bool:
'Check whether the provided path is a directory.'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.is_dir() | Check whether the provided path is a directory. | src/bandersnatch_storage_plugins/filesystem.py | is_dir | terrorizer1980/bandersnatch | 310 | python | def is_dir(self, path: PATH_TYPES) -> bool:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.is_dir() | def is_dir(self, path: PATH_TYPES) -> bool:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.is_dir()<|docstring|>Check whether the provided path is a directory.<|endoftext|> |
5491bf2a81365d1b221660096c8015a19a44935842d3cddd61a9535b242144cc | def is_file(self, path: PATH_TYPES) -> bool:
'Check whether the provided path is a file.'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.is_file() | Check whether the provided path is a file. | src/bandersnatch_storage_plugins/filesystem.py | is_file | terrorizer1980/bandersnatch | 310 | python | def is_file(self, path: PATH_TYPES) -> bool:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.is_file() | def is_file(self, path: PATH_TYPES) -> bool:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.is_file()<|docstring|>Check whether the provided path is a file.<|endoftext|> |
142093f44b5dbbb3a6646e3073a57201e1519b9ad1272b566bb349e5950b4728 | def get_file_size(self, path: PATH_TYPES) -> int:
'Return the file size of provided path.'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.stat().st_size | Return the file size of provided path. | src/bandersnatch_storage_plugins/filesystem.py | get_file_size | terrorizer1980/bandersnatch | 310 | python | def get_file_size(self, path: PATH_TYPES) -> int:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.stat().st_size | def get_file_size(self, path: PATH_TYPES) -> int:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
return path.stat().st_size<|docstring|>Return the file size of provided path.<|endoftext|> |
10d466d24577ab06ec33d1eff78f0f76c777d7faaf80d126521c94097c731d6c | def set_upload_time(self, path: PATH_TYPES, time: datetime.datetime) -> None:
'Set the upload time of a given **path**'
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
ts = time.timestamp()
os.utime(path, (ts, ts)) | Set the upload time of a given **path** | src/bandersnatch_storage_plugins/filesystem.py | set_upload_time | terrorizer1980/bandersnatch | 310 | python | def set_upload_time(self, path: PATH_TYPES, time: datetime.datetime) -> None:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
ts = time.timestamp()
os.utime(path, (ts, ts)) | def set_upload_time(self, path: PATH_TYPES, time: datetime.datetime) -> None:
if (not isinstance(path, pathlib.Path)):
path = pathlib.Path(path)
ts = time.timestamp()
os.utime(path, (ts, ts))<|docstring|>Set the upload time of a given **path**<|endoftext|> |
56b2d01be53df4b8c4edbda719d9c6258ce41557c717c832e42b8eb311b4a91b | def setup(hass, base_config):
'Setup the Lutron component.'
from pylutron import Lutron
hass.data[LUTRON_CONTROLLER] = None
hass.data[LUTRON_DEVICES] = {'light': []}
hass.data[LUTRON_GROUPS] = {}
config = base_config.get(DOMAIN)
hass.data[LUTRON_CONTROLLER] = Lutron(config['lutron_host'], config['lutron_user'], config['lutron_password'])
hass.data[LUTRON_CONTROLLER].load_xml_db()
hass.data[LUTRON_CONTROLLER].connect()
_LOGGER.info('Connected to Main Repeater at %s', config['lutron_host'])
group = get_component('group')
for area in hass.data[LUTRON_CONTROLLER].areas:
if (area.name not in hass.data[LUTRON_GROUPS]):
grp = group.Group.create_group(hass, area.name, [])
hass.data[LUTRON_GROUPS][area.name] = grp
for output in area.outputs:
hass.data[LUTRON_DEVICES]['light'].append((area.name, output))
for component in ('light',):
discovery.load_platform(hass, component, DOMAIN, None, base_config)
return True | Setup the Lutron component. | homeassistant/components/lutron.py | setup | loraxx753/skynet | 2 | python | def setup(hass, base_config):
from pylutron import Lutron
hass.data[LUTRON_CONTROLLER] = None
hass.data[LUTRON_DEVICES] = {'light': []}
hass.data[LUTRON_GROUPS] = {}
config = base_config.get(DOMAIN)
hass.data[LUTRON_CONTROLLER] = Lutron(config['lutron_host'], config['lutron_user'], config['lutron_password'])
hass.data[LUTRON_CONTROLLER].load_xml_db()
hass.data[LUTRON_CONTROLLER].connect()
_LOGGER.info('Connected to Main Repeater at %s', config['lutron_host'])
group = get_component('group')
for area in hass.data[LUTRON_CONTROLLER].areas:
if (area.name not in hass.data[LUTRON_GROUPS]):
grp = group.Group.create_group(hass, area.name, [])
hass.data[LUTRON_GROUPS][area.name] = grp
for output in area.outputs:
hass.data[LUTRON_DEVICES]['light'].append((area.name, output))
for component in ('light',):
discovery.load_platform(hass, component, DOMAIN, None, base_config)
return True | def setup(hass, base_config):
from pylutron import Lutron
hass.data[LUTRON_CONTROLLER] = None
hass.data[LUTRON_DEVICES] = {'light': []}
hass.data[LUTRON_GROUPS] = {}
config = base_config.get(DOMAIN)
hass.data[LUTRON_CONTROLLER] = Lutron(config['lutron_host'], config['lutron_user'], config['lutron_password'])
hass.data[LUTRON_CONTROLLER].load_xml_db()
hass.data[LUTRON_CONTROLLER].connect()
_LOGGER.info('Connected to Main Repeater at %s', config['lutron_host'])
group = get_component('group')
for area in hass.data[LUTRON_CONTROLLER].areas:
if (area.name not in hass.data[LUTRON_GROUPS]):
grp = group.Group.create_group(hass, area.name, [])
hass.data[LUTRON_GROUPS][area.name] = grp
for output in area.outputs:
hass.data[LUTRON_DEVICES]['light'].append((area.name, output))
for component in ('light',):
discovery.load_platform(hass, component, DOMAIN, None, base_config)
return True<|docstring|>Setup the Lutron component.<|endoftext|> |
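The setup() function above reads three keys from its component section of the Home Assistant configuration. A sketch of the base_config shape it expects, assuming DOMAIN resolves to 'lutron' and using placeholder credentials:

base_config = {
    'lutron': {
        'lutron_host': '192.168.1.10',   # main repeater address
        'lutron_user': 'lutron',
        'lutron_password': 'integration',
    }
}
# setup(hass, base_config) then connects to the repeater, creates one
# group per area, and queues each area output as a light entity.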
ce3a36179e4b12786b1b6e4eb39c25b89de819ce9ff9a39560d9a216bfa238aa | def __init__(self, hass, domain, area_name, lutron_device, controller):
'Initialize the device.'
self._lutron_device = lutron_device
self._controller = controller
self._area_name = area_name
self.hass = hass
object_id = '{} {}'.format(area_name, lutron_device.name)
self.entity_id = generate_entity_id((domain + '.{}'), object_id, hass=hass)
self._controller.subscribe(self._lutron_device, self._update_callback) | Initialize the device. | homeassistant/components/lutron.py | __init__ | loraxx753/skynet | 2 | python | def __init__(self, hass, domain, area_name, lutron_device, controller):
self._lutron_device = lutron_device
self._controller = controller
self._area_name = area_name
self.hass = hass
object_id = '{} {}'.format(area_name, lutron_device.name)
self.entity_id = generate_entity_id((domain + '.{}'), object_id, hass=hass)
self._controller.subscribe(self._lutron_device, self._update_callback) | def __init__(self, hass, domain, area_name, lutron_device, controller):
self._lutron_device = lutron_device
self._controller = controller
self._area_name = area_name
self.hass = hass
object_id = '{} {}'.format(area_name, lutron_device.name)
self.entity_id = generate_entity_id((domain + '.{}'), object_id, hass=hass)
self._controller.subscribe(self._lutron_device, self._update_callback)<|docstring|>Initialize the device.<|endoftext|> |
82f5bb6e516fe265161aaa7b9e743ffbbe67b6c2eb88f8ba247099c434b58af0 | def _update_callback(self, _device):
'Callback invoked by pylutron when the device state changes.'
self.schedule_update_ha_state() | Callback invoked by pylutron when the device state changes. | homeassistant/components/lutron.py | _update_callback | loraxx753/skynet | 2 | python | def _update_callback(self, _device):
self.schedule_update_ha_state() | def _update_callback(self, _device):
self.schedule_update_ha_state()<|docstring|>Callback invoked by pylutron when the device state changes.<|endoftext|> |
d2fa5ef853544a33241f5a41f0b6f30e5ab3168f053a76324b2d814e7e13c6ce | @property
def name(self):
'Return the name of the device.'
return self._lutron_device.name | Return the name of the device. | homeassistant/components/lutron.py | name | loraxx753/skynet | 2 | python | @property
def name(self):
return self._lutron_device.name | @property
def name(self):
return self._lutron_device.name<|docstring|>Return the name of the device.<|endoftext|> |
53669033a44cc2b7f0c0eb1c203b1e7a7c81e72e96769d5c38bc62208b72137f | @property
def should_poll(self):
'No polling needed.'
return False | No polling needed. | homeassistant/components/lutron.py | should_poll | loraxx753/skynet | 2 | python | @property
def should_poll(self):
return False | @property
def should_poll(self):
return False<|docstring|>No polling needed.<|endoftext|> |
8e65e1fef013a9f10868ac58160266b69b27c971ad46940268afcb7afee4b19f | def get_memories(tree):
'Given a Devicetree, get the list of memories to describe in the\n linker script'
regions = get_chosen_regions(tree)
compute_address_ranges(regions)
memories = invert_regions_to_memories(regions)
compute_attributes(memories)
format_hex(memories)
return memories | Given a Devicetree, get the list of memories to describe in the
linker script | memory_map.py | get_memories | sifive/metal-depend | 0 | python | def get_memories(tree):
'Given a Devicetree, get the list of memories to describe in the\n linker script'
regions = get_chosen_regions(tree)
compute_address_ranges(regions)
memories = invert_regions_to_memories(regions)
compute_attributes(memories)
format_hex(memories)
return memories | def get_memories(tree):
'Given a Devicetree, get the list of memories to describe in the\n linker script'
regions = get_chosen_regions(tree)
compute_address_ranges(regions)
memories = invert_regions_to_memories(regions)
compute_attributes(memories)
format_hex(memories)
return memories<|docstring|>Given a Devicetree, get the list of memories to describe in the
linker script<|endoftext|> |
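A sketch of driving get_memories() above from a Devicetree source file. It assumes the pydevicetree package, which provides the chosen()/get_reg()/get_path() methods this module relies on, and an illustrative design.dts defining the metal,entry and metal,ram chosen properties:

from pydevicetree import Devicetree

tree = Devicetree.parseFile('design.dts')
memories = get_memories(tree)
for name, memory in memories.items():
    print(name, memory['base_hex'], memory['length_hex'], memory['attributes'])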
65b5d14a6abfdc0410e775fa2e7bec67e41f5ecee25c82616ad36f1f44d224e4 | def get_load_map(memories, scratchpad):
'Given the list of memories in the linker script, get the lma/vma\n pairs for each of the regions in the linker script'
ram = dict()
rom = dict()
itim = dict()
if ('testram' in memories):
rom['vma'] = 'testram'
ram['lma'] = 'testram'
ram['vma'] = 'testram'
itim['lma'] = 'testram'
if ('itim' in memories['testram']['contents']):
itim['vma'] = 'testram'
elif ('itim' in memories):
itim['vma'] = 'itim'
else:
itim['vma'] = 'testram'
else:
if scratchpad:
hex_load = 'ram'
else:
hex_load = 'rom'
rom['vma'] = hex_load
ram['lma'] = hex_load
ram['vma'] = 'ram'
itim['lma'] = hex_load
if ('itim' in memories['rom']['contents']):
itim['vma'] = hex_load
elif ('itim' in memories['ram']['contents']):
itim['vma'] = 'ram'
elif ('itim' in memories):
itim['vma'] = 'itim'
else:
itim['vma'] = hex_load
return (ram, rom, itim) | Given the list of memories in the linker script, get the lma/vma
pairs for each of the regions in the linker script | memory_map.py | get_load_map | sifive/metal-depend | 0 | python | def get_load_map(memories, scratchpad):
'Given the list of memories in the linker script, get the lma/vma\n pairs for each of the regions in the linker script'
ram = dict()
rom = dict()
itim = dict()
if ('testram' in memories):
rom['vma'] = 'testram'
ram['lma'] = 'testram'
ram['vma'] = 'testram'
itim['lma'] = 'testram'
if ('itim' in memories['testram']['contents']):
itim['vma'] = 'testram'
elif ('itim' in memories):
itim['vma'] = 'itim'
else:
itim['vma'] = 'testram'
else:
if scratchpad:
hex_load = 'ram'
else:
hex_load = 'rom'
rom['vma'] = hex_load
ram['lma'] = hex_load
ram['vma'] = 'ram'
itim['lma'] = hex_load
if ('itim' in memories['rom']['contents']):
itim['vma'] = hex_load
elif ('itim' in memories['ram']['contents']):
itim['vma'] = 'ram'
elif ('itim' in memories):
itim['vma'] = 'itim'
else:
itim['vma'] = hex_load
return (ram, rom, itim) | def get_load_map(memories, scratchpad):
'Given the list of memories in the linker script, get the lma/vma\n pairs for each of the regions in the linker script'
ram = dict()
rom = dict()
itim = dict()
if ('testram' in memories):
rom['vma'] = 'testram'
ram['lma'] = 'testram'
ram['vma'] = 'testram'
itim['lma'] = 'testram'
if ('itim' in memories['testram']['contents']):
itim['vma'] = 'testram'
elif ('itim' in memories):
itim['vma'] = 'itim'
else:
itim['vma'] = 'testram'
else:
if scratchpad:
hex_load = 'ram'
else:
hex_load = 'rom'
rom['vma'] = hex_load
ram['lma'] = hex_load
ram['vma'] = 'ram'
itim['lma'] = hex_load
if ('itim' in memories['rom']['contents']):
itim['vma'] = hex_load
elif ('itim' in memories['ram']['contents']):
itim['vma'] = 'ram'
elif ('itim' in memories):
itim['vma'] = 'itim'
else:
itim['vma'] = hex_load
return (ram, rom, itim)<|docstring|>Given the list of memories in the linker script, get the lma/vma
pairs for each of the regions in the linker script<|endoftext|> |
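A worked example of get_load_map() above for the common rom+ram layout with the itim placed in ram and the scratchpad disabled; the memories dict is trimmed to the only field the function reads:

memories = {
    'rom': {'contents': ['entry']},
    'ram': {'contents': ['ram', 'itim']},
}
ram, rom, itim = get_load_map(memories, scratchpad=False)
assert rom == {'vma': 'rom'}
assert ram == {'lma': 'rom', 'vma': 'ram'}
assert itim == {'lma': 'rom', 'vma': 'ram'}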
413cae28c4d3ab8e46c6405873ac76cda9c6d74e7504af25c23210e23f452896 | def get_chosen_region(dts, chosen_property_name):
'Extract the requested address region from the chosen property'
chosen_property = dts.chosen(chosen_property_name)
if chosen_property:
chosen_node = dts.get_by_reference(chosen_property[0])
chosen_region = chosen_property[1]
chosen_offset = chosen_property[2]
return {'node': chosen_node, 'region': chosen_region, 'offset': chosen_offset}
return None | Extract the requested address region from the chosen property | memory_map.py | get_chosen_region | sifive/metal-depend | 0 | python | def get_chosen_region(dts, chosen_property_name):
chosen_property = dts.chosen(chosen_property_name)
if chosen_property:
chosen_node = dts.get_by_reference(chosen_property[0])
chosen_region = chosen_property[1]
chosen_offset = chosen_property[2]
return {'node': chosen_node, 'region': chosen_region, 'offset': chosen_offset}
return None | def get_chosen_region(dts, chosen_property_name):
chosen_property = dts.chosen(chosen_property_name)
if chosen_property:
chosen_node = dts.get_by_reference(chosen_property[0])
chosen_region = chosen_property[1]
chosen_offset = chosen_property[2]
return {'node': chosen_node, 'region': chosen_region, 'offset': chosen_offset}
return None<|docstring|>Extract the requested address region from the chosen property<|endoftext|> |
5ba6f8f0082ed609d5fd930e97fa4d4e8dbdc36958798992a754de8ac875291a | def get_chosen_regions(tree):
'Given the tree, get the regions requested by chosen properties.\n Exits with an error if required properties are missing or the\n parameters are invalid'
regions = {'entry': get_chosen_region(tree, 'metal,entry'), 'ram': get_chosen_region(tree, 'metal,ram'), 'itim': get_chosen_region(tree, 'metal,itim')}
if (regions['entry'] is None):
print('ERROR: metal,entry is not defined by the Devicetree')
sys.exit(1)
if (regions['ram'] is None):
print('ERROR: metal,ram is not defined by the Devicetree')
sys.exit(1)
return regions | Given the tree, get the regions requested by chosen properties.
Exits with an error if required properties are missing or the
parameters are invalid | memory_map.py | get_chosen_regions | sifive/metal-depend | 0 | python | def get_chosen_regions(tree):
'Given the tree, get the regions requested by chosen properties.\n Exits with an error if required properties are missing or the\n parameters are invalid'
regions = {'entry': get_chosen_region(tree, 'metal,entry'), 'ram': get_chosen_region(tree, 'metal,ram'), 'itim': get_chosen_region(tree, 'metal,itim')}
if (regions['entry'] is None):
print('ERROR: metal,entry is not defined by the Devicetree')
sys.exit(1)
if (regions['ram'] is None):
print('ERROR: metal,ram is not defined by the Devicetree')
sys.exit(1)
return regions | def get_chosen_regions(tree):
'Given the tree, get the regions requested by chosen properties.\n Exits with an error if required properties are missing or the\n parameters are invalid'
regions = {'entry': get_chosen_region(tree, 'metal,entry'), 'ram': get_chosen_region(tree, 'metal,ram'), 'itim': get_chosen_region(tree, 'metal,itim')}
if (regions['entry'] is None):
print('ERROR: metal,entry is not defined by the Devicetree')
sys.exit(1)
if (regions['ram'] is None):
print('ERROR: metal,ram is not defined by the Devicetree')
sys.exit(1)
return regions<|docstring|>Given the tree, get the regions requested by chosen properties.
Exits with an error if required properties are missing or the
parameters are invalid<|endoftext|> |
f9750ad1c9ca47b1a2b07ebd1212aea712f2f57262c1cd051b65411d37a623fb | def compute_address_range(region):
'Extract the address range from the reg of the Node'
reg = region['node'].get_reg()
base = (reg[region['region']][0] + region['offset'])
length = (reg[region['region']][1] - region['offset'])
region['base'] = base
region['length'] = length | Extract the address range from the reg of the Node | memory_map.py | compute_address_range | sifive/metal-depend | 0 | python | def compute_address_range(region):
reg = region['node'].get_reg()
base = (reg[region['region']][0] + region['offset'])
length = (reg[region['region']][1] - region['offset'])
region['base'] = base
region['length'] = length | def compute_address_range(region):
reg = region['node'].get_reg()
base = (reg[region['region']][0] + region['offset'])
length = (reg[region['region']][1] - region['offset'])
region['base'] = base
region['length'] = length<|docstring|>Extract the address range from the reg of the Node<|endoftext|> |
3ccece3618de3c6ba04c65e93cd86186d75c6b5025f794b51f175769aa80ddc9 | def compute_address_ranges(regions):
'Given the requested regions, compute the effective address ranges\n to use for each'
for (_, region) in regions.items():
if (region is not None):
compute_address_range(region)
region_values = [r for r in regions.values() if (r is not None)]
nodes = list({region['node'] for region in region_values})
for node in nodes:
partition = [region for region in region_values if (region['node'] == node)]
partition.sort(key=(lambda x: x['offset']))
if ((len(partition) >= 2) and (partition[0]['base'] != partition[1]['base'])):
partition[0]['length'] = (partition[1]['base'] - partition[0]['base'])
if ((len(partition) >= 3) and (partition[1]['base'] != partition[2]['base'])):
partition[1]['length'] = (partition[2]['base'] - partition[1]['base'])
return regions | Given the requested regions, compute the effective address ranges
to use for each | memory_map.py | compute_address_ranges | sifive/metal-depend | 0 | python | def compute_address_ranges(regions):
'Given the requested regions, compute the effective address ranges\n to use for each'
for (_, region) in regions.items():
if (region is not None):
compute_address_range(region)
region_values = [r for r in regions.values() if (r is not None)]
nodes = list({region['node'] for region in region_values})
for node in nodes:
partition = [region for region in region_values if (region['node'] == node)]
partition.sort(key=(lambda x: x['offset']))
if ((len(partition) >= 2) and (partition[0]['base'] != partition[1]['base'])):
partition[0]['length'] = (partition[1]['base'] - partition[0]['base'])
if ((len(partition) >= 3) and (partition[1]['base'] != partition[2]['base'])):
partition[1]['length'] = (partition[2]['base'] - partition[1]['base'])
return regions | def compute_address_ranges(regions):
'Given the requested regions, compute the effective address ranges\n to use for each'
for (_, region) in regions.items():
if (region is not None):
compute_address_range(region)
region_values = [r for r in regions.values() if (r is not None)]
nodes = list({region['node'] for region in region_values})
for node in nodes:
partition = [region for region in region_values if (region['node'] == node)]
partition.sort(key=(lambda x: x['offset']))
if ((len(partition) >= 2) and (partition[0]['base'] != partition[1]['base'])):
partition[0]['length'] = (partition[1]['base'] - partition[0]['base'])
if ((len(partition) >= 3) and (partition[1]['base'] != partition[2]['base'])):
partition[1]['length'] = (partition[2]['base'] - partition[1]['base'])
return regions<|docstring|>Given the requested regions, compute the effective address ranges
to use for each<|endoftext|> |
3bf0a3aafe169deb43803ec5c244d46b143b0ec2944a1a64a6327ae260eaae16 | def regions_overlap(region_a, region_b):
'Test if regions are identical'
if ((region_a is None) or (region_b is None)):
return False
return ((region_a['base'] == region_b['base']) and (region_a['length'] == region_b['length'])) | Test if regions are identical | memory_map.py | regions_overlap | sifive/metal-depend | 0 | python | def regions_overlap(region_a, region_b):
if ((region_a is None) or (region_b is None)):
return False
return ((region_a['base'] == region_b['base']) and (region_a['length'] == region_b['length'])) | def regions_overlap(region_a, region_b):
if ((region_a is None) or (region_b is None)):
return False
return ((region_a['base'] == region_b['base']) and (region_a['length'] == region_b['length']))<|docstring|>Test if regions are identical<|endoftext|> |
027f623bb8301b7c28cd16578a6708a239da19e7e16179ad03ea9ea5be116d32 | def invert_regions_to_memories(regions):
'Given the requested regions with computed effective address ranges,\n invert the data structure to get the list of memories for the\n linker script'
memories = dict()
if regions_overlap(regions['ram'], regions['entry']):
memories['testram'] = {'name': 'testram', 'base': regions['ram']['base'], 'length': regions['ram']['length'], 'contents': ['ram', 'entry'], 'path': regions['ram']['node'].get_path()}
if regions_overlap(regions['itim'], regions['entry']):
memories['testram']['contents'].append('itim')
elif (regions['itim'] is not None):
memories['itim'] = {'name': 'itim', 'base': regions['itim']['base'], 'length': regions['itim']['length'], 'contents': ['itim'], 'path': regions['itim']['node'].get_path()}
else:
memories['rom'] = {'name': 'rom', 'base': regions['entry']['base'], 'length': regions['entry']['length'], 'contents': ['entry'], 'path': regions['entry']['node'].get_path()}
memories['ram'] = {'name': 'ram', 'base': regions['ram']['base'], 'length': regions['ram']['length'], 'contents': ['ram'], 'path': regions['ram']['node'].get_path()}
if regions_overlap(regions['entry'], regions['itim']):
memories['rom']['contents'].append('itim')
elif regions_overlap(regions['ram'], regions['itim']):
memories['ram']['contents'].append('itim')
elif (regions['itim'] is None):
memories['ram']['contents'].append('itim')
else:
memories['itim'] = {'name': 'itim', 'base': regions['itim']['base'], 'length': regions['itim']['length'], 'contents': ['itim'], 'path': regions['itim']['node'].get_path()}
return memories | Given the requested regions with computed effective address ranges,
invert the data structure to get the list of memories for the
linker script | memory_map.py | invert_regions_to_memories | sifive/metal-depend | 0 | python | def invert_regions_to_memories(regions):
'Given the requested regions with computed effective address ranges,\n invert the data structure to get the list of memories for the\n linker script'
memories = dict()
if regions_overlap(regions['ram'], regions['entry']):
memories['testram'] = {'name': 'testram', 'base': regions['ram']['base'], 'length': regions['ram']['length'], 'contents': ['ram', 'entry'], 'path': regions['ram']['node'].get_path()}
if regions_overlap(regions['itim'], regions['entry']):
memories['testram']['contents'].append('itim')
elif (regions['itim'] is not None):
memories['itim'] = {'name': 'itim', 'base': regions['itim']['base'], 'length': regions['itim']['length'], 'contents': ['itim'], 'path': regions['itim']['node'].get_path()}
else:
memories['rom'] = {'name': 'rom', 'base': regions['entry']['base'], 'length': regions['entry']['length'], 'contents': ['entry'], 'path': regions['entry']['node'].get_path()}
memories['ram'] = {'name': 'ram', 'base': regions['ram']['base'], 'length': regions['ram']['length'], 'contents': ['ram'], 'path': regions['ram']['node'].get_path()}
if regions_overlap(regions['entry'], regions['itim']):
memories['rom']['contents'].append('itim')
elif regions_overlap(regions['ram'], regions['itim']):
memories['ram']['contents'].append('itim')
elif (regions['itim'] is None):
memories['ram']['contents'].append('itim')
else:
memories['itim'] = {'name': 'itim', 'base': regions['itim']['base'], 'length': regions['itim']['length'], 'contents': ['itim'], 'path': regions['itim']['node'].get_path()}
return memories | def invert_regions_to_memories(regions):
'Given the requested regions with computed effective address ranges,\n invert the data structure to get the list of memories for the\n linker script'
memories = dict()
if regions_overlap(regions['ram'], regions['entry']):
memories['testram'] = {'name': 'testram', 'base': regions['ram']['base'], 'length': regions['ram']['length'], 'contents': ['ram', 'entry'], 'path': regions['ram']['node'].get_path()}
if regions_overlap(regions['itim'], regions['entry']):
memories['testram']['contents'].append('itim')
elif (regions['itim'] is not None):
memories['itim'] = {'name': 'itim', 'base': regions['itim']['base'], 'length': regions['itim']['length'], 'contents': ['itim'], 'path': regions['itim']['node'].get_path()}
else:
memories['rom'] = {'name': 'rom', 'base': regions['entry']['base'], 'length': regions['entry']['length'], 'contents': ['entry'], 'path': regions['entry']['node'].get_path()}
memories['ram'] = {'name': 'ram', 'base': regions['ram']['base'], 'length': regions['ram']['length'], 'contents': ['ram'], 'path': regions['ram']['node'].get_path()}
if regions_overlap(regions['entry'], regions['itim']):
memories['rom']['contents'].append('itim')
elif regions_overlap(regions['ram'], regions['itim']):
memories['ram']['contents'].append('itim')
elif (regions['itim'] is None):
memories['ram']['contents'].append('itim')
else:
memories['itim'] = {'name': 'itim', 'base': regions['itim']['base'], 'length': regions['itim']['length'], 'contents': ['itim'], 'path': regions['itim']['node'].get_path()}
return memories<|docstring|>Given the requested regions with computed effective address ranges,
invert the data structure to get the list of memories for the
linker script<|endoftext|> |
caacf878db8004eb5ee6804ca378e860c4035acf77cd28973f8fd3ab6e5d0b95 | def attributes_from_contents(contents):
'Get the attributes from the contents of the memory'
attributes = ''
if ('entry' in contents):
attributes += 'rxi'
if ('ram' in contents):
attributes += 'rwa'
if ('itim' in contents):
attributes += 'rwxai'
attributes = ''.join(sorted(list(set(attributes))))
antiattributes = ''
for char in 'rwxai':
if (char not in attributes):
antiattributes += char
if (antiattributes != ''):
attributes += ('!' + antiattributes)
return attributes | Get the attributes from the contents of the memory | memory_map.py | attributes_from_contents | sifive/metal-depend | 0 | python | def attributes_from_contents(contents):
attributes =
if ('entry' in contents):
attributes += 'rxi'
if ('ram' in contents):
attributes += 'rwa'
if ('itim' in contents):
attributes += 'rwxai'
attributes = .join(sorted(list(set(attributes))))
antiattributes =
for char in 'rwxai':
if (char not in attributes):
antiattributes += char
if (antiattributes != ):
attributes += ('!' + antiattributes)
return attributes | def attributes_from_contents(contents):
attributes =
if ('entry' in contents):
attributes += 'rxi'
if ('ram' in contents):
attributes += 'rwa'
if ('itim' in contents):
attributes += 'rwxai'
attributes = .join(sorted(list(set(attributes))))
antiattributes =
for char in 'rwxai':
if (char not in attributes):
antiattributes += char
if (antiattributes != ):
attributes += ('!' + antiattributes)
return attributes<|docstring|>Get the attributes from the contents of the memory<|endoftext|> |
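Worked examples for attributes_from_contents() above, obtained by tracing the code:

print(attributes_from_contents(['ram']))          # 'arw!xi'  (rwa, sorted, with x and i negated)
print(attributes_from_contents(['entry']))        # 'irx!wa'  (rxi, sorted, with w and a negated)
print(attributes_from_contents(['ram', 'itim']))  # 'airwx'   (all five attributes, so no '!' suffix)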
962f0d094412dd1d9b64fca9693c81e7a3bbd09b690f488a2dded71106f8e034 | def compute_attributes(memories):
'Given the list of memories and their contents, compute the linker\n script attributes'
for (_, memory) in memories.items():
memory['attributes'] = attributes_from_contents(memory['contents']) | Given the list of memories and their contents, compute the linker
script attributes | memory_map.py | compute_attributes | sifive/metal-depend | 0 | python | def compute_attributes(memories):
'Given the list of memories and their contents, compute the linker\n script attributes'
for (_, memory) in memories.items():
memory['attributes'] = attributes_from_contents(memory['contents']) | def compute_attributes(memories):
'Given the list of memories and their contents, compute the linker\n script attributes'
for (_, memory) in memories.items():
memory['attributes'] = attributes_from_contents(memory['contents'])<|docstring|>Given the list of memories and their contents, compute the linker
script attributes<|endoftext|> |
c9c0f58f4b129acde5f778a4fa9215ac8706b4d26d05657c0ea38d090ce677e0 | def format_hex(memories):
'Provide hex-formatted base and length for parameterizing template'
for (_, memory) in memories.items():
memory['base_hex'] = ('0x%x' % memory['base'])
memory['length_hex'] = ('0x%x' % memory['length']) | Provide hex-formatted base and length for parameterizing template | memory_map.py | format_hex | sifive/metal-depend | 0 | python | def format_hex(memories):
for (_, memory) in memories.items():
memory['base_hex'] = ('0x%x' % memory['base'])
memory['length_hex'] = ('0x%x' % memory['length']) | def format_hex(memories):
for (_, memory) in memories.items():
memory['base_hex'] = ('0x%x' % memory['base'])
memory['length_hex'] = ('0x%x' % memory['length'])<|docstring|>Provide hex-formatted base and length for parameterizing template<|endoftext|> |
327e4948fa96c47e8ffd9bcf8085b055bf4d49546bff39b0c8811c2cad680cb2 | def pad_to_max_length(self, max_seq_length, pad_token_id):
'Pad the feature vectors so that they all have max_seq_length.\n\n Args:\n max_seq_length: The length that features will have after padding.\n pad_token_id: input_ids feature is padded with this ID, other features\n with ID 0.\n '
pad_len = (max_seq_length - len(self.features['input_ids']))
for key in self.features:
pad_id = (pad_token_id if (key == 'input_ids') else 0)
self.features[key].extend(([pad_id] * pad_len))
if (len(self.features[key]) != max_seq_length):
raise ValueError('{} has length {} (should be {}).'.format(key, len(self.features[key]), max_seq_length)) | Pad the feature vectors so that they all have max_seq_length.
Args:
max_seq_length: The length that features will have after padding.
pad_token_id: input_ids feature is padded with this ID, other features
with ID 0. | bert_example.py | pad_to_max_length | tejvi-m/lasertagger | 592 | python | def pad_to_max_length(self, max_seq_length, pad_token_id):
'Pad the feature vectors so that they all have max_seq_length.\n\n Args:\n max_seq_length: The length that features will have after padding.\n pad_token_id: input_ids feature is padded with this ID, other features\n with ID 0.\n '
pad_len = (max_seq_length - len(self.features['input_ids']))
for key in self.features:
pad_id = (pad_token_id if (key == 'input_ids') else 0)
self.features[key].extend(([pad_id] * pad_len))
if (len(self.features[key]) != max_seq_length):
raise ValueError('{} has length {} (should be {}).'.format(key, len(self.features[key]), max_seq_length)) | def pad_to_max_length(self, max_seq_length, pad_token_id):
'Pad the feature vectors so that they all have max_seq_length.\n\n Args:\n max_seq_length: The length that features will have after padding.\n pad_token_id: input_ids feature is padded with this ID, other features\n with ID 0.\n '
pad_len = (max_seq_length - len(self.features['input_ids']))
for key in self.features:
pad_id = (pad_token_id if (key == 'input_ids') else 0)
self.features[key].extend(([pad_id] * pad_len))
if (len(self.features[key]) != max_seq_length):
raise ValueError('{} has length {} (should be {}).'.format(key, len(self.features[key]), max_seq_length))<|docstring|>Pad the feature vectors so that they all have max_seq_length.
Args:
max_seq_length: The length that features will have after padding.
pad_token_id: input_ids feature is padded with this ID, other features
with ID 0.<|endoftext|> |
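The padding rule above is easy to check in isolation; a standalone sketch with a hypothetical two-feature dict (pad_token_id applies only to input_ids, every other feature pads with 0):

features = {'input_ids': [101, 7592, 102], 'input_mask': [1, 1, 1]}
max_seq_length, pad_token_id = 6, 0
for key in features:
    pad_id = pad_token_id if key == 'input_ids' else 0
    features[key].extend([pad_id] * (max_seq_length - len(features[key])))
# features == {'input_ids': [101, 7592, 102, 0, 0, 0], 'input_mask': [1, 1, 1, 0, 0, 0]}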
fefb2768e027bdbbedf596e729a12f813a82de16c9b22f6d2f3f597986c97691 | def to_tf_example(self):
'Returns this object as a tf.Example.'
def int_feature(values):
return tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))
tf_features = collections.OrderedDict([(key, int_feature(val)) for (key, val) in self.features.items()])
return tf.train.Example(features=tf.train.Features(feature=tf_features)) | Returns this object as a tf.Example. | bert_example.py | to_tf_example | tejvi-m/lasertagger | 592 | python | def to_tf_example(self):
def int_feature(values):
return tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))
tf_features = collections.OrderedDict([(key, int_feature(val)) for (key, val) in self.features.items()])
return tf.train.Example(features=tf.train.Features(feature=tf_features)) | def to_tf_example(self):
def int_feature(values):
return tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))
tf_features = collections.OrderedDict([(key, int_feature(val)) for (key, val) in self.features.items()])
return tf.train.Example(features=tf.train.Features(feature=tf_features))<|docstring|>Returns this object as a tf.Example.<|endoftext|> |
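The conversion preserves feature order via an OrderedDict; a self-contained sketch of the same int64 packing with the tf.train protos used above:

import collections
import tensorflow as tf

feats = collections.OrderedDict(
    (key, tf.train.Feature(int64_list=tf.train.Int64List(value=list(vals))))
    for key, vals in [('input_ids', [101, 102]), ('labels', [0, 1])])
example = tf.train.Example(features=tf.train.Features(feature=feats))
print(example.SerializeToString()[:10])  # bytes ready for a TFRecord writer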
bb5a66e9ecbb1bc97bbccbe9c3e117acf55a31370d540cd3f8ab7ada34d22d51 | def get_token_labels(self):
'Returns labels/tags for the original tokens, not for wordpieces.'
labels = []
for idx in self._token_start_indices:
if ((idx < len(self.features['labels'])) and self.features['labels_mask'][idx]):
labels.append(self.features['labels'][idx])
else:
labels.append(self._default_label)
return labels | Returns labels/tags for the original tokens, not for wordpieces. | bert_example.py | get_token_labels | tejvi-m/lasertagger | 592 | python | def get_token_labels(self):
labels = []
for idx in self._token_start_indices:
if ((idx < len(self.features['labels'])) and self.features['labels_mask'][idx]):
labels.append(self.features['labels'][idx])
else:
labels.append(self._default_label)
return labels | def get_token_labels(self):
labels = []
for idx in self._token_start_indices:
if ((idx < len(self.features['labels'])) and self.features['labels_mask'][idx]):
labels.append(self.features['labels'][idx])
else:
labels.append(self._default_label)
return labels<|docstring|>Returns labels/tags for the original tokens, not for wordpieces.<|endoftext|> |
1f4da86fc21653febf16d7682abb18127212074cc03d2a8b0bca629c37ad1b84 | def __init__(self, label_map, vocab_file, max_seq_length, do_lower_case, converter):
'Initializes an instance of BertExampleBuilder.\n\n Args:\n label_map: Mapping from tags to tag IDs.\n vocab_file: Path to BERT vocabulary file.\n max_seq_length: Maximum sequence length.\n do_lower_case: Whether to lower case the input text. Should be True for\n uncased models and False for cased models.\n converter: Converter from text targets to tags.\n '
self._label_map = label_map
self._tokenizer = tokenization.FullTokenizer(vocab_file, do_lower_case=do_lower_case)
self._max_seq_length = max_seq_length
self._converter = converter
self._pad_id = self._get_pad_id()
self._keep_tag_id = self._label_map['KEEP'] | Initializes an instance of BertExampleBuilder.
Args:
label_map: Mapping from tags to tag IDs.
vocab_file: Path to BERT vocabulary file.
max_seq_length: Maximum sequence length.
do_lower_case: Whether to lower case the input text. Should be True for
uncased models and False for cased models.
converter: Converter from text targets to tags. | bert_example.py | __init__ | tejvi-m/lasertagger | 592 | python | def __init__(self, label_map, vocab_file, max_seq_length, do_lower_case, converter):
'Initializes an instance of BertExampleBuilder.\n\n Args:\n label_map: Mapping from tags to tag IDs.\n vocab_file: Path to BERT vocabulary file.\n max_seq_length: Maximum sequence length.\n do_lower_case: Whether to lower case the input text. Should be True for\n uncased models and False for cased models.\n converter: Converter from text targets to tags.\n '
self._label_map = label_map
self._tokenizer = tokenization.FullTokenizer(vocab_file, do_lower_case=do_lower_case)
self._max_seq_length = max_seq_length
self._converter = converter
self._pad_id = self._get_pad_id()
self._keep_tag_id = self._label_map['KEEP'] | def __init__(self, label_map, vocab_file, max_seq_length, do_lower_case, converter):
'Initializes an instance of BertExampleBuilder.\n\n Args:\n label_map: Mapping from tags to tag IDs.\n vocab_file: Path to BERT vocabulary file.\n max_seq_length: Maximum sequence length.\n do_lower_case: Whether to lower case the input text. Should be True for\n uncased models and False for cased models.\n converter: Converter from text targets to tags.\n '
self._label_map = label_map
self._tokenizer = tokenization.FullTokenizer(vocab_file, do_lower_case=do_lower_case)
self._max_seq_length = max_seq_length
self._converter = converter
self._pad_id = self._get_pad_id()
self._keep_tag_id = self._label_map['KEEP']<|docstring|>Initializes an instance of BertExampleBuilder.
Args:
label_map: Mapping from tags to tag IDs.
vocab_file: Path to BERT vocabulary file.
max_seq_length: Maximum sequence length.
do_lower_case: Whether to lower case the input text. Should be True for
uncased models and False for cased models.
converter: Converter from text targets to tags.<|endoftext|> |
cc6c7559993523fdcbc4c3d3d041bb837598235d88b0844a972c2e36085ea3cb | def build_bert_example(self, sources, target=None, use_arbitrary_target_ids_for_infeasible_examples=False):
"Constructs a BERT Example.\n\n Args:\n sources: List of source texts.\n target: Target text or None when building an example during inference.\n use_arbitrary_target_ids_for_infeasible_examples: Whether to build an\n example with arbitrary target ids even if the target can't be obtained\n via tagging.\n\n Returns:\n BertExample, or None if the conversion from text to tags was infeasible\n and use_arbitrary_target_ids_for_infeasible_examples == False.\n "
task = tagging.EditingTask(sources)
if (target is not None):
tags = self._converter.compute_tags(task, target)
if (not tags):
if use_arbitrary_target_ids_for_infeasible_examples:
tags = [(tagging.Tag('KEEP') if ((i % 2) == 0) else tagging.Tag('DELETE')) for (i, _) in enumerate(task.source_tokens)]
else:
return None
else:
tags = [tagging.Tag('KEEP') for _ in task.source_tokens]
labels = [self._label_map[str(tag)] for tag in tags]
(tokens, labels, token_start_indices) = self._split_to_wordpieces(task.source_tokens, labels)
tokens = self._truncate_list(tokens)
labels = self._truncate_list(labels)
input_tokens = ((['[CLS]'] + tokens) + ['[SEP]'])
labels_mask = (([0] + ([1] * len(labels))) + [0])
labels = (([0] + labels) + [0])
input_ids = self._tokenizer.convert_tokens_to_ids(input_tokens)
input_mask = ([1] * len(input_ids))
segment_ids = ([0] * len(input_ids))
example = BertExample(input_ids=input_ids, input_mask=input_mask, segment_ids=segment_ids, labels=labels, labels_mask=labels_mask, token_start_indices=token_start_indices, task=task, default_label=self._keep_tag_id)
example.pad_to_max_length(self._max_seq_length, self._pad_id)
return example | Constructs a BERT Example.
Args:
sources: List of source texts.
target: Target text or None when building an example during inference.
use_arbitrary_target_ids_for_infeasible_examples: Whether to build an
example with arbitrary target ids even if the target can't be obtained
via tagging.
Returns:
BertExample, or None if the conversion from text to tags was infeasible
and use_arbitrary_target_ids_for_infeasible_examples == False. | bert_example.py | build_bert_example | tejvi-m/lasertagger | 592 | python | def build_bert_example(self, sources, target=None, use_arbitrary_target_ids_for_infeasible_examples=False):
"Constructs a BERT Example.\n\n Args:\n sources: List of source texts.\n target: Target text or None when building an example during inference.\n use_arbitrary_target_ids_for_infeasible_examples: Whether to build an\n example with arbitrary target ids even if the target can't be obtained\n via tagging.\n\n Returns:\n BertExample, or None if the conversion from text to tags was infeasible\n and use_arbitrary_target_ids_for_infeasible_examples == False.\n "
task = tagging.EditingTask(sources)
if (target is not None):
tags = self._converter.compute_tags(task, target)
if (not tags):
if use_arbitrary_target_ids_for_infeasible_examples:
tags = [(tagging.Tag('KEEP') if ((i % 2) == 0) else tagging.Tag('DELETE')) for (i, _) in enumerate(task.source_tokens)]
else:
return None
else:
tags = [tagging.Tag('KEEP') for _ in task.source_tokens]
labels = [self._label_map[str(tag)] for tag in tags]
(tokens, labels, token_start_indices) = self._split_to_wordpieces(task.source_tokens, labels)
tokens = self._truncate_list(tokens)
labels = self._truncate_list(labels)
input_tokens = ((['[CLS]'] + tokens) + ['[SEP]'])
labels_mask = (([0] + ([1] * len(labels))) + [0])
labels = (([0] + labels) + [0])
input_ids = self._tokenizer.convert_tokens_to_ids(input_tokens)
input_mask = ([1] * len(input_ids))
segment_ids = ([0] * len(input_ids))
example = BertExample(input_ids=input_ids, input_mask=input_mask, segment_ids=segment_ids, labels=labels, labels_mask=labels_mask, token_start_indices=token_start_indices, task=task, default_label=self._keep_tag_id)
example.pad_to_max_length(self._max_seq_length, self._pad_id)
return example | def build_bert_example(self, sources, target=None, use_arbitrary_target_ids_for_infeasible_examples=False):
"Constructs a BERT Example.\n\n Args:\n sources: List of source texts.\n target: Target text or None when building an example during inference.\n use_arbitrary_target_ids_for_infeasible_examples: Whether to build an\n example with arbitrary target ids even if the target can't be obtained\n via tagging.\n\n Returns:\n BertExample, or None if the conversion from text to tags was infeasible\n and use_arbitrary_target_ids_for_infeasible_examples == False.\n "
task = tagging.EditingTask(sources)
if (target is not None):
tags = self._converter.compute_tags(task, target)
if (not tags):
if use_arbitrary_target_ids_for_infeasible_examples:
tags = [(tagging.Tag('KEEP') if ((i % 2) == 0) else tagging.Tag('DELETE')) for (i, _) in enumerate(task.source_tokens)]
else:
return None
else:
tags = [tagging.Tag('KEEP') for _ in task.source_tokens]
labels = [self._label_map[str(tag)] for tag in tags]
(tokens, labels, token_start_indices) = self._split_to_wordpieces(task.source_tokens, labels)
tokens = self._truncate_list(tokens)
labels = self._truncate_list(labels)
input_tokens = ((['[CLS]'] + tokens) + ['[SEP]'])
labels_mask = (([0] + ([1] * len(labels))) + [0])
labels = (([0] + labels) + [0])
input_ids = self._tokenizer.convert_tokens_to_ids(input_tokens)
input_mask = ([1] * len(input_ids))
segment_ids = ([0] * len(input_ids))
example = BertExample(input_ids=input_ids, input_mask=input_mask, segment_ids=segment_ids, labels=labels, labels_mask=labels_mask, token_start_indices=token_start_indices, task=task, default_label=self._keep_tag_id)
example.pad_to_max_length(self._max_seq_length, self._pad_id)
return example<|docstring|>Constructs a BERT Example.
Args:
sources: List of source texts.
target: Target text or None when building an example during inference.
use_arbitrary_target_ids_for_infeasible_examples: Whether to build an
example with arbitrary target ids even if the target can't be obtained
via tagging.
Returns:
BertExample, or None if the conversion from text to tags was infeasible
and use_arbitrary_target_ids_for_infeasible_examples == False.<|endoftext|> |
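A hedged usage sketch; the label map values, vocab path, and tag converter below are placeholders (the real ones come from the surrounding LaserTagger pipeline):

builder = BertExampleBuilder(label_map={'KEEP': 0, 'DELETE': 1},  # assumed map
                             vocab_file='vocab.txt',              # assumed path
                             max_seq_length=128,
                             do_lower_case=True,
                             converter=my_converter)              # hypothetical converter
example = builder.build_bert_example(sources=['Turn this into one sentence .'],
                                     target='One sentence .')
if example is not None:  # None when the target cannot be reached by tagging
    tf_example = example.to_tf_example()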
1b2444e98cb3dd59d47b2f5a2808f8f57e4ac7d35a5248ec2124fd9150813479 | def _split_to_wordpieces(self, tokens, labels):
'Splits tokens (and the labels accordingly) to WordPieces.\n\n Args:\n tokens: Tokens to be split.\n labels: Labels (one per token) to be split.\n\n Returns:\n 3-tuple with the split tokens, split labels, and the indices of the\n WordPieces that start a token.\n '
bert_tokens = []
bert_labels = []
token_start_indices = []
for (i, token) in enumerate(tokens):
token_start_indices.append((len(bert_tokens) + 1))
pieces = self._tokenizer.tokenize(token)
bert_tokens.extend(pieces)
bert_labels.extend(([labels[i]] * len(pieces)))
return (bert_tokens, bert_labels, token_start_indices) | Splits tokens (and the labels accordingly) to WordPieces.
Args:
tokens: Tokens to be split.
labels: Labels (one per token) to be split.
Returns:
3-tuple with the split tokens, split labels, and the indices of the
WordPieces that start a token. | bert_example.py | _split_to_wordpieces | tejvi-m/lasertagger | 592 | python | def _split_to_wordpieces(self, tokens, labels):
'Splits tokens (and the labels accordingly) to WordPieces.\n\n Args:\n tokens: Tokens to be split.\n labels: Labels (one per token) to be split.\n\n Returns:\n 3-tuple with the split tokens, split labels, and the indices of the\n WordPieces that start a token.\n '
bert_tokens = []
bert_labels = []
token_start_indices = []
for (i, token) in enumerate(tokens):
token_start_indices.append((len(bert_tokens) + 1))
pieces = self._tokenizer.tokenize(token)
bert_tokens.extend(pieces)
bert_labels.extend(([labels[i]] * len(pieces)))
return (bert_tokens, bert_labels, token_start_indices) | def _split_to_wordpieces(self, tokens, labels):
'Splits tokens (and the labels accordingly) to WordPieces.\n\n Args:\n tokens: Tokens to be split.\n labels: Labels (one per token) to be split.\n\n Returns:\n 3-tuple with the split tokens, split labels, and the indices of the\n WordPieces that start a token.\n '
bert_tokens = []
bert_labels = []
token_start_indices = []
for (i, token) in enumerate(tokens):
token_start_indices.append((len(bert_tokens) + 1))
pieces = self._tokenizer.tokenize(token)
bert_tokens.extend(pieces)
bert_labels.extend(([labels[i]] * len(pieces)))
return (bert_tokens, bert_labels, token_start_indices)<|docstring|>Splits tokens (and the labels accordingly) to WordPieces.
Args:
tokens: Tokens to be split.
labels: Labels (one per token) to be split.
Returns:
3-tuple with the split tokens, split labels, and the indices of the
WordPieces that start a token.<|endoftext|> |
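A worked example of the alignment performed above: each token's label is copied to all of its WordPieces, and token_start_indices is offset by 1 to leave room for the [CLS] prepended later. The pieces here are illustrative, not from a real vocabulary:

tokens, labels = ['unaffable', 'day'], [1, 0]
pieces_per_token = [['una', '##ffa', '##ble'], ['day']]

bert_tokens, bert_labels, starts = [], [], []
for token_pieces, label in zip(pieces_per_token, labels):
    starts.append(len(bert_tokens) + 1)  # +1 accounts for the later [CLS]
    bert_tokens.extend(token_pieces)
    bert_labels.extend([label] * len(token_pieces))

assert bert_labels == [1, 1, 1, 0] and starts == [1, 4]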
8d1626a694af7a2dd38de405c38b5141531ff249893a478fd0c610062f2da0ab | def _truncate_list(self, x):
'Returns truncated version of x according to the self._max_seq_length.'
return x[:(self._max_seq_length - 2)] | Returns truncated version of x according to the self._max_seq_length. | bert_example.py | _truncate_list | tejvi-m/lasertagger | 592 | python | def _truncate_list(self, x):
return x[:(self._max_seq_length - 2)] | def _truncate_list(self, x):
return x[:(self._max_seq_length - 2)]<|docstring|>Returns truncated version of x according to the self._max_seq_length.<|endoftext|> |
8cdaec4ed04916ed63252ff26939995e849b5508ecaaea453c233006a4cfed25 | def _get_pad_id(self):
"Returns the ID of the [PAD] token (or 0 if it's not in the vocab)."
try:
return self._tokenizer.convert_tokens_to_ids(['[PAD]'])[0]
except KeyError:
return 0 | Returns the ID of the [PAD] token (or 0 if it's not in the vocab). | bert_example.py | _get_pad_id | tejvi-m/lasertagger | 592 | python | def _get_pad_id(self):
try:
return self._tokenizer.convert_tokens_to_ids(['[PAD]'])[0]
except KeyError:
return 0 | def _get_pad_id(self):
try:
return self._tokenizer.convert_tokens_to_ids(['[PAD]'])[0]
except KeyError:
return 0<|docstring|>Returns the ID of the [PAD] token (or 0 if it's not in the vocab).<|endoftext|> |
631590e318a7d96aa37f016598d8f0f1bd742aedec8e94c011b2407cc2c5c8c0 | def load(file, *, bitmap=None, palette=None):
'Loads a bmp image from the open ``file``.\n\n Returns tuple of bitmap object and palette object.\n\n :param object bitmap: Type to store bitmap data. Must have API similar to `displayio.Bitmap`.\n Will be skipped if None\n :param object palette: Type to store the palette. Must have API similar to\n `displayio.Palette`. Will be skipped if None'
file.seek(10)
data_start = int.from_bytes(file.read(4), 'little')
file.seek(18)
width = int.from_bytes(file.read(4), 'little')
try:
height = int.from_bytes(file.read(4), 'little')
except OverflowError as error:
raise NotImplementedError('Negative height BMP files are not supported on builds without longint') from error
file.seek(28)
color_depth = int.from_bytes(file.read(2), 'little')
file.seek(30)
compression = int.from_bytes(file.read(2), 'little')
file.seek(46)
colors = int.from_bytes(file.read(4), 'little')
if ((colors == 0) and (color_depth >= 16)):
raise NotImplementedError('True color BMP unsupported')
if (compression > 2):
raise NotImplementedError('bitmask compression unsupported')
if (colors == 0):
colors = (2 ** color_depth)
from . import indexed
return indexed.load(file, width, height, data_start, colors, color_depth, compression, bitmap=bitmap, palette=palette) | Loads a bmp image from the open ``file``.
Returns tuple of bitmap object and palette object.
:param object bitmap: Type to store bitmap data. Must have API similar to `displayio.Bitmap`.
Will be skipped if None
:param object palette: Type to store the palette. Must have API similar to
`displayio.Palette`. Will be skipped if None | lib/adafruit_imageload/bmp/__init__.py | load | jacoblb64/pico_rgb_keypad_hid | 47 | python | def load(file, *, bitmap=None, palette=None):
'Loads a bmp image from the open ``file``.\n\n Returns tuple of bitmap object and palette object.\n\n :param object bitmap: Type to store bitmap data. Must have API similar to `displayio.Bitmap`.\n Will be skipped if None\n :param object palette: Type to store the palette. Must have API similar to\n `displayio.Palette`. Will be skipped if None'
file.seek(10)
data_start = int.from_bytes(file.read(4), 'little')
file.seek(18)
width = int.from_bytes(file.read(4), 'little')
try:
height = int.from_bytes(file.read(4), 'little')
except OverflowError as error:
raise NotImplementedError('Negative height BMP files are not supported on builds without longint') from error
file.seek(28)
color_depth = int.from_bytes(file.read(2), 'little')
file.seek(30)
compression = int.from_bytes(file.read(2), 'little')
file.seek(46)
colors = int.from_bytes(file.read(4), 'little')
if ((colors == 0) and (color_depth >= 16)):
raise NotImplementedError('True color BMP unsupported')
if (compression > 2):
raise NotImplementedError('bitmask compression unsupported')
if (colors == 0):
colors = (2 ** color_depth)
from . import indexed
return indexed.load(file, width, height, data_start, colors, color_depth, compression, bitmap=bitmap, palette=palette) | def load(file, *, bitmap=None, palette=None):
'Loads a bmp image from the open ``file``.\n\n Returns tuple of bitmap object and palette object.\n\n :param object bitmap: Type to store bitmap data. Must have API similar to `displayio.Bitmap`.\n Will be skipped if None\n :param object palette: Type to store the palette. Must have API similar to\n `displayio.Palette`. Will be skipped if None'
file.seek(10)
data_start = int.from_bytes(file.read(4), 'little')
file.seek(18)
width = int.from_bytes(file.read(4), 'little')
try:
height = int.from_bytes(file.read(4), 'little')
except OverflowError as error:
raise NotImplementedError('Negative height BMP files are not supported on builds without longint') from error
file.seek(28)
color_depth = int.from_bytes(file.read(2), 'little')
file.seek(30)
compression = int.from_bytes(file.read(2), 'little')
file.seek(46)
colors = int.from_bytes(file.read(4), 'little')
if ((colors == 0) and (color_depth >= 16)):
raise NotImplementedError('True color BMP unsupported')
if (compression > 2):
raise NotImplementedError('bitmask compression unsupported')
if (colors == 0):
colors = (2 ** color_depth)
from . import indexed
return indexed.load(file, width, height, data_start, colors, color_depth, compression, bitmap=bitmap, palette=palette)<|docstring|>Loads a bmp image from the open ``file``.
Returns tuple of bitmap object and palette object.
:param object bitmap: Type to store bitmap data. Must have API similar to `displayio.Bitmap`.
Will be skipped if None
:param object palette: Type to store the palette. Must have API similar to
`displayio.Palette`. Will be skipped if None<|endoftext|> |
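The seek offsets above follow the standard BMP header layout; a standalone struct-based sketch that reads the same fields without the displayio/CircuitPython dependency:

import struct

def read_bmp_header(path):
    # Offsets mirror the seeks above: 10=data start, 18=width, 22=height,
    # 28=bit depth, 30=compression, 46=palette color count.
    with open(path, 'rb') as f:
        header = f.read(54)
    data_start, = struct.unpack_from('<I', header, 10)
    width, height = struct.unpack_from('<ii', header, 18)  # height may be negative
    color_depth, = struct.unpack_from('<H', header, 28)
    compression, = struct.unpack_from('<H', header, 30)
    colors, = struct.unpack_from('<I', header, 46)
    return data_start, width, height, color_depth, compression, colors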
fbe77fbd5b9481e699ab291da9f23c1393c0a45ca7031f1953381f4b6e896110 | def recupdate(d, u):
'TODO\n\t'
for (k, v) in u.iteritems():
if isinstance(v, collections.Mapping):
r = recupdate(d.get(k, {}), v)
d[k] = r
else:
d[k] = u[k]
return d | TODO | test/test.py | recupdate | aidanhs/shutit | 2 | python | def recupdate(d, u):
'\n\t'
for (k, v) in u.iteritems():
if isinstance(v, collections.Mapping):
r = recupdate(d.get(k, {}), v)
d[k] = r
else:
d[k] = u[k]
return d | def recupdate(d, u):
'\n\t'
for (k, v) in u.iteritems():
if isinstance(v, collections.Mapping):
r = recupdate(d.get(k, {}), v)
d[k] = r
else:
d[k] = u[k]
return d<|docstring|>TODO<|endoftext|> |
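The archived function is Python 2 (iteritems, collections.Mapping); a hedged Python 3 equivalent of the same recursive merge, with a usage example:

from collections.abc import Mapping

def recupdate3(d, u):
    # Recursively merge u into d, descending into nested mappings.
    for k, v in u.items():
        if isinstance(v, Mapping):
            d[k] = recupdate3(d.get(k, {}), v)
        else:
            d[k] = v
    return d

cfg = {'build': {'debug': False}}
recupdate3(cfg, {'build': {'tutorial': True}})
# cfg == {'build': {'debug': False, 'tutorial': True}}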
f2183b1db34714dc1464a91c64bc409c66adbceb0f1e8ad59dd1f9de56e41128 | def setUp(self):
'TODO\n\t\t'
self.shutit = shutit_global.init()
def noop(*args, **kwargs):
pass
def fail(*args, **kwargs):
raise ShutItTestException('failed')
self.shutit.log = noop
self.shutit.fail = fail
self.shutit.get_default_child = noop
recupdate(self.shutit.cfg, {'build': {'tutorial': False, 'debug': False, 'show_depgraph_only': False, 'interactive': 0}, 'host': {'shutit_module_path': 'dummy1:dummy2'}}) | TODO | test/test.py | setUp | aidanhs/shutit | 2 | python | def setUp(self):
'\n\t\t'
self.shutit = shutit_global.init()
def noop(*args, **kwargs):
pass
def fail(*args, **kwargs):
raise ShutItTestException('failed')
self.shutit.log = noop
self.shutit.fail = fail
self.shutit.get_default_child = noop
recupdate(self.shutit.cfg, {'build': {'tutorial': False, 'debug': False, 'show_depgraph_only': False, 'interactive': 0}, 'host': {'shutit_module_path': 'dummy1:dummy2'}}) | def setUp(self):
'\n\t\t'
self.shutit = shutit_global.init()
def noop(*args, **kwargs):
pass
def fail(*args, **kwargs):
raise ShutItTestException('failed')
self.shutit.log = noop
self.shutit.fail = fail
self.shutit.get_default_child = noop
recupdate(self.shutit.cfg, {'build': {'tutorial': False, 'debug': False, 'show_depgraph_only': False, 'interactive': 0}, 'host': {'shutit_module_path': 'dummy1:dummy2'}})<|docstring|>TODO<|endoftext|> |
bdf1f6088e059c26d402314dc4290ba8f9afd49b29ede94e752feee6acab889f | def test_dep_exists_err(self):
'TODO\n\t\t'
self.shutit.cfg.update({'tk.shutit.test1': {'build': True, 'remove': False}})
self.shutit.shutit_map = {'tk.shutit.test1': Bunch(module_id='tk.shutit.test1', run_order=1.1, depends_on=['tk.shutit.test0'])}
errs = shutit_main.check_deps(self.shutit)
self.assertEqual(len(errs), 1)
self.assertEqual(len(errs[0]), 1) | TODO | test/test.py | test_dep_exists_err | aidanhs/shutit | 2 | python | def test_dep_exists_err(self):
'\n\t\t'
self.shutit.cfg.update({'tk.shutit.test1': {'build': True, 'remove': False}})
self.shutit.shutit_map = {'tk.shutit.test1': Bunch(module_id='tk.shutit.test1', run_order=1.1, depends_on=['tk.shutit.test0'])}
errs = shutit_main.check_deps(self.shutit)
self.assertEqual(len(errs), 1)
self.assertEqual(len(errs[0]), 1) | def test_dep_exists_err(self):
'\n\t\t'
self.shutit.cfg.update({'tk.shutit.test1': {'build': True, 'remove': False}})
self.shutit.shutit_map = {'tk.shutit.test1': Bunch(module_id='tk.shutit.test1', run_order=1.1, depends_on=['tk.shutit.test0'])}
errs = shutit_main.check_deps(self.shutit)
self.assertEqual(len(errs), 1)
self.assertEqual(len(errs[0]), 1)<|docstring|>TODO<|endoftext|> |
47387bf3d4c03a5c51b1573ada9ac122d8b7e058a6e53b954365725d9393f790 | def test_dep_build_err(self):
'TODO\n\t\t'
self.shutit.cfg.update({'tk.shutit.test1': {'build': False, 'shutit.core.module.build_ifneeded': False, 'remove': False}, 'tk.shutit.test2': {'build': True, 'remove': False}})
self.shutit.shutit_map = {'tk.shutit.test2': Bunch(module_id='tk.shutit.test2', run_order=1.2, depends_on=['tk.shutit.test1']), 'tk.shutit.test1': Bunch(module_id='tk.shutit.test1', run_order=1.1, depends_on=[], is_installed=(lambda c: False))}
errs = shutit_main.check_deps(self.shutit)
self.assertEqual(len(errs), 1)
self.assertEqual(len(errs[0]), 1) | TODO | test/test.py | test_dep_build_err | aidanhs/shutit | 2 | python | def test_dep_build_err(self):
'\n\t\t'
self.shutit.cfg.update({'tk.shutit.test1': {'build': False, 'shutit.core.module.build_ifneeded': False, 'remove': False}, 'tk.shutit.test2': {'build': True, 'remove': False}})
self.shutit.shutit_map = {'tk.shutit.test2': Bunch(module_id='tk.shutit.test2', run_order=1.2, depends_on=['tk.shutit.test1']), 'tk.shutit.test1': Bunch(module_id='tk.shutit.test1', run_order=1.1, depends_on=[], is_installed=(lambda c: False))}
errs = shutit_main.check_deps(self.shutit)
self.assertEqual(len(errs), 1)
self.assertEqual(len(errs[0]), 1) | def test_dep_build_err(self):
'\n\t\t'
self.shutit.cfg.update({'tk.shutit.test1': {'build': False, 'shutit.core.module.build_ifneeded': False, 'remove': False}, 'tk.shutit.test2': {'build': True, 'remove': False}})
self.shutit.shutit_map = {'tk.shutit.test2': Bunch(module_id='tk.shutit.test2', run_order=1.2, depends_on=['tk.shutit.test1']), 'tk.shutit.test1': Bunch(module_id='tk.shutit.test1', run_order=1.1, depends_on=[], is_installed=(lambda c: False))}
errs = shutit_main.check_deps(self.shutit)
self.assertEqual(len(errs), 1)
self.assertEqual(len(errs[0]), 1)<|docstring|>TODO<|endoftext|> |
81857805f73c2f9e6c4c7d6102cd4f952443c3ef22ee57a8d227b22e1a9ba710 | def test_dep_order_err(self):
'TODO\n\t\t'
self.shutit.cfg.update({'tk.shutit.test1': {'build': True, 'remove': False}, 'tk.shutit.test2': {'build': True, 'remove': False}})
self.shutit.shutit_map = {'tk.shutit.test2': Bunch(module_id='tk.shutit.test2', run_order=1.2, depends_on=['tk.shutit.test1']), 'tk.shutit.test1': Bunch(module_id='tk.shutit.test1', run_order=1.9, depends_on=[])}
errs = shutit_main.check_deps(self.shutit)
self.assertEqual(len(errs), 1)
self.assertEqual(len(errs[0]), 1) | TODO | test/test.py | test_dep_order_err | aidanhs/shutit | 2 | python | def test_dep_order_err(self):
'\n\t\t'
self.shutit.cfg.update({'tk.shutit.test1': {'build': True, 'remove': False}, 'tk.shutit.test2': {'build': True, 'remove': False}})
self.shutit.shutit_map = {'tk.shutit.test2': Bunch(module_id='tk.shutit.test2', run_order=1.2, depends_on=['tk.shutit.test1']), 'tk.shutit.test1': Bunch(module_id='tk.shutit.test1', run_order=1.9, depends_on=[])}
errs = shutit_main.check_deps(self.shutit)
self.assertEqual(len(errs), 1)
self.assertEqual(len(errs[0]), 1) | def test_dep_order_err(self):
'\n\t\t'
self.shutit.cfg.update({'tk.shutit.test1': {'build': True, 'remove': False}, 'tk.shutit.test2': {'build': True, 'remove': False}})
self.shutit.shutit_map = {'tk.shutit.test2': Bunch(module_id='tk.shutit.test2', run_order=1.2, depends_on=['tk.shutit.test1']), 'tk.shutit.test1': Bunch(module_id='tk.shutit.test1', run_order=1.9, depends_on=[])}
errs = shutit_main.check_deps(self.shutit)
self.assertEqual(len(errs), 1)
self.assertEqual(len(errs[0]), 1)<|docstring|>TODO<|endoftext|> |
b9d8e0a49ab6497f97c9241c05ec711b3a6e767cbfceaa5f9a28bf7c16118011 | def test_dep_resolution(self):
'TODO\n\t\t'
self.shutit.cfg.update({'tk.shutit.test1': {'build': False, 'shutit.core.module.build_ifneeded': True, 'remove': False}, 'tk.shutit.test2': {'build': False, 'shutit.core.module.build_ifneeded': True, 'remove': False}, 'tk.shutit.test3': {'build': True, 'remove': False}})
self.shutit.shutit_map = {'tk.shutit.test3': Bunch(module_id='tk.shutit.test3', run_order=1.3, depends_on=['tk.shutit.test2']), 'tk.shutit.test2': Bunch(module_id='tk.shutit.test2', run_order=1.2, depends_on=['tk.shutit.test1']), 'tk.shutit.test1': Bunch(module_id='tk.shutit.test1', run_order=1.1, depends_on=[])}
errs = shutit_main.check_deps(self.shutit)
self.assertEqual(len(errs), 0)
assert all([self.shutit.cfg[mod_id]['build'] for mod_id in self.shutit.shutit_map]) | TODO | test/test.py | test_dep_resolution | aidanhs/shutit | 2 | python | def test_dep_resolution(self):
'\n\t\t'
self.shutit.cfg.update({'tk.shutit.test1': {'build': False, 'shutit.core.module.build_ifneeded': True, 'remove': False}, 'tk.shutit.test2': {'build': False, 'shutit.core.module.build_ifneeded': True, 'remove': False}, 'tk.shutit.test3': {'build': True, 'remove': False}})
self.shutit.shutit_map = {'tk.shutit.test3': Bunch(module_id='tk.shutit.test3', run_order=1.3, depends_on=['tk.shutit.test2']), 'tk.shutit.test2': Bunch(module_id='tk.shutit.test2', run_order=1.2, depends_on=['tk.shutit.test1']), 'tk.shutit.test1': Bunch(module_id='tk.shutit.test1', run_order=1.1, depends_on=[])}
errs = shutit_main.check_deps(self.shutit)
self.assertEqual(len(errs), 0)
assert all([self.shutit.cfg[mod_id]['build'] for mod_id in self.shutit.shutit_map]) | def test_dep_resolution(self):
'\n\t\t'
self.shutit.cfg.update({'tk.shutit.test1': {'build': False, 'shutit.core.module.build_ifneeded': True, 'remove': False}, 'tk.shutit.test2': {'build': False, 'shutit.core.module.build_ifneeded': True, 'remove': False}, 'tk.shutit.test3': {'build': True, 'remove': False}})
self.shutit.shutit_map = {'tk.shutit.test3': Bunch(module_id='tk.shutit.test3', run_order=1.3, depends_on=['tk.shutit.test2']), 'tk.shutit.test2': Bunch(module_id='tk.shutit.test2', run_order=1.2, depends_on=['tk.shutit.test1']), 'tk.shutit.test1': Bunch(module_id='tk.shutit.test1', run_order=1.1, depends_on=[])}
errs = shutit_main.check_deps(self.shutit)
self.assertEqual(len(errs), 0)
assert all([self.shutit.cfg[mod_id]['build'] for mod_id in self.shutit.shutit_map])<|docstring|>TODO<|endoftext|> |
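The four tests above exercise shutit_main.check_deps. The resolution behaviour the last test asserts can be sketched independently as a walk up the dependency chain that flips build flags wherever build-if-needed is allowed — a simplification of the real implementation:

def resolve(mod_id, cfg, deps):
    for dep in deps.get(mod_id, []):
        if not cfg[dep]['build'] and cfg[dep].get('build_ifneeded'):
            cfg[dep]['build'] = True
        resolve(dep, cfg, deps)

cfg = {'t1': {'build': False, 'build_ifneeded': True},
       't2': {'build': False, 'build_ifneeded': True},
       't3': {'build': True}}
resolve('t3', cfg, {'t3': ['t2'], 't2': ['t1']})
assert all(cfg[m]['build'] for m in cfg)  # building t3 pulls in t2 and t1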
0951ad837b7c470c2c2dde69485c296f231c4364816b9587eab80a7befa8176b | def deleteNode(self, node):
'\n :type node: ListNode\n :rtype: void Do not return anything, modify node in-place instead.\n '
'\n Method 1:\n\n * Traverse through the LinkedList, by constantly\n checking the value of the next element.\n * If the next element is equal to the value of\n the node given, double jump the pointer\n * Assign the val and node to the next.next pointer\n\n Your runtime beats 94.15 % of python3 submissions.\n '
node.val = node.next.val
node.next = node.next.next | :type node: ListNode
:rtype: void Do not return anything, modify node in-place instead. | 00_Code/01_LeetCode/237_DeleteNodeinaLinkedList.py | deleteNode | KartikKannapur/Data_Structures_and_Algorithms_Python | 1 | python | def deleteNode(self, node):
'\n :type node: ListNode\n :rtype: void Do not return anything, modify node in-place instead.\n '
'\n Method 1:\n\n * Traverse through the LinkedList, by constantly\n checking the value of the next element.\n * If the next element is equal to the value of\n the node given, double jump the pointer\n * Assign the val and node to the next.next pointer\n\n Your runtime beats 94.15 % of python3 submissions.\n '
node.val = node.next.val
node.next = node.next.next | def deleteNode(self, node):
'\n :type node: ListNode\n :rtype: void Do not return anything, modify node in-place instead.\n '
'\n Method 1:\n\n * Traverse through the LinkedList, by constantly\n checking the value of the next element.\n * If the next element is equal to the value of\n the node given, double jump the pointer\n * Assign the val and node to the next.next pointer\n\n Your runtime beats 94.15 % of python3 submissions.\n '
node.val = node.next.val
node.next = node.next.next<|docstring|>:type node: ListNode
:rtype: void Do not return anything, modify node in-place instead.<|endoftext|> |
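A self-contained demo of the copy-and-skip trick above — the standard answer when only the node to delete, not the list head, is available:

class ListNode:
    def __init__(self, val):
        self.val, self.next = val, None

a, b, c = ListNode(1), ListNode(2), ListNode(3)
a.next, b.next = b, c

# Delete b without touching the head: copy the successor's value, then bypass it.
b.val = b.next.val
b.next = b.next.next
# List is now 1 -> 3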
42a013d2d5ff1b8c8ffb9aa2605cc134e9950dfdbd6cea2da57f5986a11c5982 | async def async_setup_platform(hass: HomeAssistant, config: ConfigType, async_add_entities: AddEntitiesCallback, discovery_info: (DiscoveryInfoType | None)=None) -> None:
'Old way of setting up the platform.\n\n Can only be called when a user accidentally mentions the platform in their\n config. But even in that case it would have been ignored.\n ' | Old way of setting up the platform.
Can only be called when a user accidentally mentions the platform in their
config. But even in that case it would have been ignored. | homeassistant/components/daikin/switch.py | async_setup_platform | GrandMoff100/homeassistant-core | 30,023 | python | async def async_setup_platform(hass: HomeAssistant, config: ConfigType, async_add_entities: AddEntitiesCallback, discovery_info: (DiscoveryInfoType | None)=None) -> None:
'Old way of setting up the platform.\n\n Can only be called when a user accidentally mentions the platform in their\n config. But even in that case it would have been ignored.\n ' | async def async_setup_platform(hass: HomeAssistant, config: ConfigType, async_add_entities: AddEntitiesCallback, discovery_info: (DiscoveryInfoType | None)=None) -> None:
'Old way of setting up the platform.\n\n Can only be called when a user accidentally mentions the platform in their\n config. But even in that case it would have been ignored.\n '<|docstring|>Old way of setting up the platform.
Can only be called when a user accidentally mentions the platform in their
config. But even in that case it would have been ignored.<|endoftext|> |
2bdb20805a3cb5e8f1c63d0e6cb4c9eff3e37f7c0cba6cd94db3a3e023eb1d0d | async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry, async_add_entities: AddEntitiesCallback) -> None:
'Set up Daikin climate based on config_entry.'
daikin_api = hass.data[DAIKIN_DOMAIN][entry.entry_id]
switches: list[(DaikinZoneSwitch | DaikinStreamerSwitch)] = []
if (zones := daikin_api.device.zones):
switches.extend([DaikinZoneSwitch(daikin_api, zone_id) for (zone_id, zone) in enumerate(zones) if (zone != ('-', '0'))])
if daikin_api.device.support_advanced_modes:
switches.append(DaikinStreamerSwitch(daikin_api))
if switches:
async_add_entities(switches) | Set up Daikin climate based on config_entry. | homeassistant/components/daikin/switch.py | async_setup_entry | GrandMoff100/homeassistant-core | 30,023 | python | async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry, async_add_entities: AddEntitiesCallback) -> None:
daikin_api = hass.data[DAIKIN_DOMAIN][entry.entry_id]
switches: list[(DaikinZoneSwitch | DaikinStreamerSwitch)] = []
if (zones := daikin_api.device.zones):
switches.extend([DaikinZoneSwitch(daikin_api, zone_id) for (zone_id, zone) in enumerate(zones) if (zone != ('-', '0'))])
if daikin_api.device.support_advanced_modes:
switches.append(DaikinStreamerSwitch(daikin_api))
if switches:
async_add_entities(switches) | async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry, async_add_entities: AddEntitiesCallback) -> None:
daikin_api = hass.data[DAIKIN_DOMAIN][entry.entry_id]
switches: list[(DaikinZoneSwitch | DaikinStreamerSwitch)] = []
if (zones := daikin_api.device.zones):
switches.extend([DaikinZoneSwitch(daikin_api, zone_id) for (zone_id, zone) in enumerate(zones) if (zone != ('-', '0'))])
if daikin_api.device.support_advanced_modes:
switches.append(DaikinStreamerSwitch(daikin_api))
if switches:
async_add_entities(switches)<|docstring|>Set up Daikin climate based on config_entry.<|endoftext|> |
786dc7c94b9f1d717fada956a97088cde84725bc1ff0f8072b650d67c1d7372d | def __init__(self, daikin_api, zone_id):
'Initialize the zone.'
self._api = daikin_api
self._zone_id = zone_id | Initialize the zone. | homeassistant/components/daikin/switch.py | __init__ | GrandMoff100/homeassistant-core | 30,023 | python | def __init__(self, daikin_api, zone_id):
self._api = daikin_api
self._zone_id = zone_id | def __init__(self, daikin_api, zone_id):
self._api = daikin_api
self._zone_id = zone_id<|docstring|>Initialize the zone.<|endoftext|> |
1c9b21f9231725ef7c9555c4b203aa0327c74f03c6acbfd7535dc1d0b55304ad | @property
def unique_id(self):
'Return a unique ID.'
return f'{self._api.device.mac}-zone{self._zone_id}' | Return a unique ID. | homeassistant/components/daikin/switch.py | unique_id | GrandMoff100/homeassistant-core | 30,023 | python | @property
def unique_id(self):
return f'{self._api.device.mac}-zone{self._zone_id}' | @property
def unique_id(self):
return f'{self._api.device.mac}-zone{self._zone_id}'<|docstring|>Return a unique ID.<|endoftext|> |
2f82babec3baa535cf43f3c926105ae380a378109c41e572c25f757d242eae46 | @property
def icon(self):
'Icon to use in the frontend, if any.'
return ZONE_ICON | Icon to use in the frontend, if any. | homeassistant/components/daikin/switch.py | icon | GrandMoff100/homeassistant-core | 30,023 | python | @property
def icon(self):
return ZONE_ICON | @property
def icon(self):
return ZONE_ICON<|docstring|>Icon to use in the frontend, if any.<|endoftext|> |
f6067532d5c3e40f9877674f0272f696c57f9c28fd6fc7a4a41a33495df8a5c0 | @property
def name(self):
'Return the name of the sensor.'
return f'{self._api.name} {self._api.device.zones[self._zone_id][0]}' | Return the name of the sensor. | homeassistant/components/daikin/switch.py | name | GrandMoff100/homeassistant-core | 30,023 | python | @property
def name(self):
return f'{self._api.name} {self._api.device.zones[self._zone_id][0]}' | @property
def name(self):
return f'{self._api.name} {self._api.device.zones[self._zone_id][0]}'<|docstring|>Return the name of the sensor.<|endoftext|> |
667d640d49feaa1ee56d0b55478135813ff286697a903185c60e0dc4b636c8a3 | @property
def is_on(self):
'Return the state of the sensor.'
return (self._api.device.zones[self._zone_id][1] == '1') | Return the state of the sensor. | homeassistant/components/daikin/switch.py | is_on | GrandMoff100/homeassistant-core | 30,023 | python | @property
def is_on(self):
return (self._api.device.zones[self._zone_id][1] == '1') | @property
def is_on(self):
return (self._api.device.zones[self._zone_id][1] == '1')<|docstring|>Return the state of the sensor.<|endoftext|> |
9a38b60885e1d5262da764bc73cf881f0640eaac9b157b80fc7b01581d455164 | @property
def device_info(self):
'Return a device description for device registry.'
return self._api.device_info | Return a device description for device registry. | homeassistant/components/daikin/switch.py | device_info | GrandMoff100/homeassistant-core | 30,023 | python | @property
def device_info(self):
return self._api.device_info | @property
def device_info(self):
return self._api.device_info<|docstring|>Return a device description for device registry.<|endoftext|> |
b1a89076e5c34df8d4b99e7e415c8c643851f299207e2f8918c4b2ad6c5e9ee1 | async def async_update(self):
'Retrieve latest state.'
(await self._api.async_update()) | Retrieve latest state. | homeassistant/components/daikin/switch.py | async_update | GrandMoff100/homeassistant-core | 30,023 | python | async def async_update(self):
(await self._api.async_update()) | async def async_update(self):
(await self._api.async_update())<|docstring|>Retrieve latest state.<|endoftext|> |
9cb85054cc65d04da535a3ae767d3d4af5e9f25154af6f962466b0962c536d95 | async def async_turn_on(self, **kwargs):
'Turn the zone on.'
(await self._api.device.set_zone(self._zone_id, '1')) | Turn the zone on. | homeassistant/components/daikin/switch.py | async_turn_on | GrandMoff100/homeassistant-core | 30,023 | python | async def async_turn_on(self, **kwargs):
(await self._api.device.set_zone(self._zone_id, '1')) | async def async_turn_on(self, **kwargs):
(await self._api.device.set_zone(self._zone_id, '1'))<|docstring|>Turn the zone on.<|endoftext|> |
366db0483bb8036b950b385f64c6c138f0a4ca759472e738d0ae3a6fa2d5bb3c | async def async_turn_off(self, **kwargs):
'Turn the zone off.'
(await self._api.device.set_zone(self._zone_id, '0')) | Turn the zone off. | homeassistant/components/daikin/switch.py | async_turn_off | GrandMoff100/homeassistant-core | 30,023 | python | async def async_turn_off(self, **kwargs):
(await self._api.device.set_zone(self._zone_id, '0')) | async def async_turn_off(self, **kwargs):
(await self._api.device.set_zone(self._zone_id, '0'))<|docstring|>Turn the zone off.<|endoftext|> |
c3f48bd6f8be9efb100264671d62b467f59d8af3cc11163c7440bde781cdbd1c | def __init__(self, daikin_api):
'Initialize streamer switch.'
self._api = daikin_api | Initialize streamer switch. | homeassistant/components/daikin/switch.py | __init__ | GrandMoff100/homeassistant-core | 30,023 | python | def __init__(self, daikin_api):
self._api = daikin_api | def __init__(self, daikin_api):
self._api = daikin_api<|docstring|>Initialize streamer switch.<|endoftext|> |
657b8fbabc1f8f2a97c84b99baf38b6cfe68651dc1b42279910a925eab45f7c8 | @property
def unique_id(self):
'Return a unique ID.'
return f'{self._api.device.mac}-streamer' | Return a unique ID. | homeassistant/components/daikin/switch.py | unique_id | GrandMoff100/homeassistant-core | 30,023 | python | @property
def unique_id(self):
return f'{self._api.device.mac}-streamer' | @property
def unique_id(self):
return f'{self._api.device.mac}-streamer'<|docstring|>Return a unique ID.<|endoftext|> |
c8862940163a2824a7f30b1fcdd19642ba0690344ea5cd5405d456a15b129166 | @property
def icon(self):
'Icon to use in the frontend, if any.'
return STREAMER_ICON | Icon to use in the frontend, if any. | homeassistant/components/daikin/switch.py | icon | GrandMoff100/homeassistant-core | 30,023 | python | @property
def icon(self):
return STREAMER_ICON | @property
def icon(self):
return STREAMER_ICON<|docstring|>Icon to use in the frontend, if any.<|endoftext|> |
ff58d7873d842f285c164f79a518b7667b1ef446043eddec5100953c80d4dae6 | @property
def name(self):
'Return the name of the sensor.'
return f'{self._api.name} streamer' | Return the name of the sensor. | homeassistant/components/daikin/switch.py | name | GrandMoff100/homeassistant-core | 30,023 | python | @property
def name(self):
return f'{self._api.name} streamer' | @property
def name(self):
return f'{self._api.name} streamer'<|docstring|>Return the name of the sensor.<|endoftext|> |
e214d188d0769fd6c6cc114700c8e05dc9e4f4e482cf8525681fd06d6b46c31a | @property
def is_on(self):
'Return the state of the sensor.'
return (DAIKIN_ATTR_STREAMER in self._api.device.represent(DAIKIN_ATTR_ADVANCED)[1]) | Return the state of the sensor. | homeassistant/components/daikin/switch.py | is_on | GrandMoff100/homeassistant-core | 30,023 | python | @property
def is_on(self):
return (DAIKIN_ATTR_STREAMER in self._api.device.represent(DAIKIN_ATTR_ADVANCED)[1]) | @property
def is_on(self):
return (DAIKIN_ATTR_STREAMER in self._api.device.represent(DAIKIN_ATTR_ADVANCED)[1])<|docstring|>Return the state of the sensor.<|endoftext|> |
9a38b60885e1d5262da764bc73cf881f0640eaac9b157b80fc7b01581d455164 | @property
def device_info(self):
'Return a device description for device registry.'
return self._api.device_info | Return a device description for device registry. | homeassistant/components/daikin/switch.py | device_info | GrandMoff100/homeassistant-core | 30,023 | python | @property
def device_info(self):
return self._api.device_info | @property
def device_info(self):
return self._api.device_info<|docstring|>Return a device description for device registry.<|endoftext|> |
b1a89076e5c34df8d4b99e7e415c8c643851f299207e2f8918c4b2ad6c5e9ee1 | async def async_update(self):
'Retrieve latest state.'
(await self._api.async_update()) | Retrieve latest state. | homeassistant/components/daikin/switch.py | async_update | GrandMoff100/homeassistant-core | 30,023 | python | async def async_update(self):
(await self._api.async_update()) | async def async_update(self):
(await self._api.async_update())<|docstring|>Retrieve latest state.<|endoftext|> |
5b1b3b72d21e699138519a932602fa0efc31dff9c97aa5faf083b04122cbcae7 | async def async_turn_on(self, **kwargs):
'Turn the zone on.'
(await self._api.device.set_streamer('on')) | Turn the zone on. | homeassistant/components/daikin/switch.py | async_turn_on | GrandMoff100/homeassistant-core | 30,023 | python | async def async_turn_on(self, **kwargs):
(await self._api.device.set_streamer('on')) | async def async_turn_on(self, **kwargs):
(await self._api.device.set_streamer('on'))<|docstring|>Turn the zone on.<|endoftext|> |
fec191dbd7a2ad63d880eace887d85dcd014f173c2ca77a7bd8a3825628edccf | async def async_turn_off(self, **kwargs):
'Turn the zone off.'
(await self._api.device.set_streamer('off')) | Turn the zone off. | homeassistant/components/daikin/switch.py | async_turn_off | GrandMoff100/homeassistant-core | 30,023 | python | async def async_turn_off(self, **kwargs):
(await self._api.device.set_streamer('off')) | async def async_turn_off(self, **kwargs):
(await self._api.device.set_streamer('off'))<|docstring|>Turn the zone off.<|endoftext|> |
728e445a0f05a898af22921096c517e3e34a262c40be0e151ad56567def0f63c | @classmethod
def delete_stale(cls):
'Delete stale tokens, ie tokens that are more than TOKEN_DURATION seconds older.'
cls.objects.filter(timestamp__lt=(now() - timedelta(seconds=3600))).delete() | Delete stale tokens, ie tokens that are more than TOKEN_DURATION seconds older. | sandbox/clientaddress/models.py | delete_stale | Bastilla123/shop2 | 0 | python | @classmethod
def delete_stale(cls):
cls.objects.filter(timestamp__lt=(now() - timedelta(seconds=3600))).delete() | @classmethod
def delete_stale(cls):
cls.objects.filter(timestamp__lt=(now() - timedelta(seconds=3600))).delete()<|docstring|>Delete stale tokens, ie tokens that are more than TOKEN_DURATION seconds older.<|endoftext|> |
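One detail worth noting: the docstring refers to TOKEN_DURATION, but the body inlines 3600 seconds. A hedged, parameterized equivalent of the same queryset — Token stands in for the model class, which is not shown in the record:

from datetime import timedelta
from django.utils.timezone import now

TOKEN_DURATION = 3600  # assumed constant; the body above hardcodes the value

stale = Token.objects.filter(timestamp__lt=now() - timedelta(seconds=TOKEN_DURATION))
stale.delete()  # same effect as delete_stale(), with the duration named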
e4eca10db8d47835f47b7e2426e10abc1eaf81b678938be288f47ebd7211ad40 | def find_colocated_sections(self, courses):
' Return colocated/cross-listed sections\n\n Parameters\n ----------\n courses : list of courses\n Set of courses to check for colocated ones\n\n Returns\n -------\n list of courses\n All colocated sections.\n\n '
sections = []
if self.colocated_sections:
for colo in self.colocated_sections:
for course in courses:
if (colo == course.section_def_refid):
sections.append(course)
break
return sections | Return colocated/cross-listed sections
Parameters
----------
courses : list of courses
Set of courses to check for colocated ones
Returns
-------
list of courses
All colocated sections. | lib/course.py | find_colocated_sections | cca/libraries_course_lists2 | 0 | python | def find_colocated_sections(self, courses):
' Return colocated/cross-listed sections\n\n Parameters\n ----------\n courses : list of courses\n Set of courses to check for colocated ones\n\n Returns\n -------\n list of courses\n All colocated sections.\n\n '
sections = []
if self.colocated_sections:
for colo in self.colocated_sections:
for course in courses:
if (colo == course.section_def_refid):
sections.append(course)
break
return sections | def find_colocated_sections(self, courses):
' Return colocated/cross-listed sections\n\n Parameters\n ----------\n courses : list of courses\n Set of courses to check for colocated ones\n\n Returns\n -------\n list of courses\n All colocated sections.\n\n '
sections = []
if self.colocated_sections:
for colo in self.colocated_sections:
for course in courses:
if (colo == course.section_def_refid):
sections.append(course)
break
return sections<|docstring|>Return colocated/cross-listed sections
Parameters
----------
courses : list of courses
Set of courses to check for colocated ones
Returns
-------
list of courses
All colocated sections.<|endoftext|> |
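A small usage sketch of the matching logic, with lightweight stand-ins for the course objects (the real ones come from the library's course-list data):

from types import SimpleNamespace as NS

art = NS(section_def_refid='A1', colocated_sections=['B2'])
bio = NS(section_def_refid='B2', colocated_sections=['A1'])
chem = NS(section_def_refid='C3', colocated_sections=None)

# Borrowing the method's logic: match colocated refids against the pool.
pool = [art, bio, chem]
matches = [c for refid in art.colocated_sections for c in pool
           if c.section_def_refid == refid]
assert matches == [bio]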
9bcf4a4ba2bf5a9fee82a185611c016deda00749d71d3dbfde61802dd57a1749 | @property
def on_portal(self):
' boolean for whether a course is included in Portal course catalog '
if ((self.hidden != '1') and (self.status in PORTAL_STATUSES) and (self.owner != 'EXTED')):
return True
return False | boolean for whether a course is included in Portal course catalog | lib/course.py | on_portal | cca/libraries_course_lists2 | 0 | python | @property
def on_portal(self):
' '
if ((self.hidden != '1') and (self.status in PORTAL_STATUSES) and (self.owner != 'EXTED')):
return True
return False | @property
def on_portal(self):
' '
if ((self.hidden != '1') and (self.status in PORTAL_STATUSES) and (self.owner != 'EXTED')):
return True
return False<|docstring|>boolean for whether a course is included in Portal course catalog<|endoftext|> |
963fde7230c9187ff36f03fd4aa246aa7f793ac88d089a4c6716b978c1403c7c | def list(self):
'Get all available facilities.\n\n \x0c\n :returns: a list of all Pureport facilities\n :rtype: list\n '
return self.client.find_facilities() | Get all available facilities.
:returns: a list of all Pureport facilities
:rtype: list | pureport_client/commands/facilities/__init__.py | list | pureport/pureport-python-client | 4 | python | def list(self):
'Get all available facilities.\n\n \x0c\n :returns: a list of all Pureport facilities\n :rtype: list\n '
return self.client.find_facilities() | def list(self):
'Get all available facilities.\n\n \x0c\n :returns: a list of all Pureport facilities\n :rtype: list\n '
return self.client.find_facilities()<|docstring|>Get all available facilities.
:returns: a list of all Pureport facilities
:rtype: list<|endoftext|> |
6a5c7eeab69339722ce168277b2ecb5d8fe20de2a0d7dafd3a986b4abd912153 | @argument('facility_id')
def get(self, facility_id):
'Get a facility with the provided facility id.\n\n \x0c\n :param facility_id: the id of the facility to retrieve\n :type facility_id: str\n\n :returns: a facility object\n :rtype: Facility\n '
return self.client.get_facility(facility_id) | Get a facility with the provided facility id.
:param facility_id: the id of the facility to retrieve
:type facility_id: str
:returns: a facility object
:rtype: Facility | pureport_client/commands/facilities/__init__.py | get | pureport/pureport-python-client | 4 | python | @argument('facility_id')
def get(self, facility_id):
'Get a facility with the provided facility id.\n\n \x0c\n :param facility_id: the id of the facility to retrieve\n :type facility_id: str\n\n :returns: a facility object\n :rtype: Facility\n '
return self.client.get_facility(facility_id) | @argument('facility_id')
def get(self, facility_id):
'Get a facility with the provided facility id.\n\n \x0c\n :param facility_id: the id of the facility to retrieve\n :type facility_id: str\n\n :returns: a facility object\n :rtype: Facility\n '
return self.client.get_facility(facility_id)<|docstring|>Get a facility with the provided facility id.
:param facility_id: the id of the facility to retrieve
:type facility_id: str
:returns: a facility object
:rtype: Facility<|endoftext|> |
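Both commands defer to the underlying API client. A hedged usage sketch — 'client' stands in for an authenticated Pureport client (an assumption); only find_facilities() and get_facility() are taken from the code above:

facilities = client.find_facilities()
for facility in facilities:
    print(facility.get('id'), facility.get('name'))

facility = client.get_facility(facilities[0]['id'])  # fetch one by its id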
7621d5cf0a896edf8d0f7a104b2fc3abfe6ce61a8fc58f35afe69433a1a44bd3 | def test_get_about_key():
'Get a key from the about dictionary.'
key = about.get_about_key('Python')
assert (key == platform.python_version()) | Get a key from the about dictionary. | {{cookiecutter.project_slug}}/tests/test_about_widget.py | test_get_about_key | sisoe24/nuke-pyside-template | 0 | python | def test_get_about_key():
key = about.get_about_key('Python')
assert (key == platform.python_version()) | def test_get_about_key():
key = about.get_about_key('Python')
assert (key == platform.python_version())<|docstring|>Get a key from the about dictionary.<|endoftext|> |
eb2e06053d44c09f58f73f2abfd802100fa34f276ba9f38eb6fb591d937571f3 | def test_about_python_version():
'Python version should be 3.7.7 for nuke 13'
version = about.get_about_key('Python')
assert (version <= '3.7.7') | Python version should be 3.7.7 for nuke 13 | {{cookiecutter.project_slug}}/tests/test_about_widget.py | test_about_python_version | sisoe24/nuke-pyside-template | 0 | python | def test_about_python_version():
version = about.get_about_key('Python')
assert (version <= '3.7.7') | def test_about_python_version():
version = about.get_about_key('Python')
assert (version <= '3.7.7')<|docstring|>Python version should be 3.7.7 for nuke 13<|endoftext|> |
c5073db307825cb8a0822ba82a1828b36b49e2a6cbbf42be9a4bda8005e7e803 | def test_get_about_missing_key():
'Get a key that is not present in about dictionary.'
key = about.get_about_key('Maya')
assert (key == '') | Get a key that is not present in about dictionary. | {{cookiecutter.project_slug}}/tests/test_about_widget.py | test_get_about_missing_key | sisoe24/nuke-pyside-template | 0 | python | def test_get_about_missing_key():
key = about.get_about_key('Maya')
assert (key == '') | def test_get_about_missing_key():
key = about.get_about_key('Maya')
assert (key == '')<|docstring|>Get a key that is not present in about dictionary.<|endoftext|>
16a852b44f5bc19622b995690acdeacaa0e6ce4bc8bc2b6d2f0b5ca794c06302 | def test_about_to_string_exclude_key():
'Get the about data in string format and exclude one key.'
keys = about.about_to_string(exclude=['Python'])
assert ('Python' not in keys) | Get the about data in string format and exclude one key. | {{cookiecutter.project_slug}}/tests/test_about_widget.py | test_about_to_string_exclude_key | sisoe24/nuke-pyside-template | 0 | python | def test_about_to_string_exclude_key():
keys = about.about_to_string(exclude=['Python'])
assert ('Python' not in keys) | def test_about_to_string_exclude_key():
keys = about.about_to_string(exclude=['Python'])
assert ('Python' not in keys)<|docstring|>Get the about data in string format and exclude one key.<|endoftext|> |
7884ce897942604be905d54a8540c624981e02877b778fad9ecb05f7d49792af | @pytest.fixture()
def _about_widget(qtbot):
'Initiate about widget class.'
widget = about_widget.AboutWidget()
qtbot.addWidget(widget)
(yield widget) | Initiate about widget class. | {{cookiecutter.project_slug}}/tests/test_about_widget.py | _about_widget | sisoe24/nuke-pyside-template | 0 | python | @pytest.fixture()
def _about_widget(qtbot):
widget = about_widget.AboutWidget()
qtbot.addWidget(widget)
(yield widget) | @pytest.fixture()
def _about_widget(qtbot):
widget = about_widget.AboutWidget()
qtbot.addWidget(widget)
(yield widget)<|docstring|>Initiate about widget class.<|endoftext|> |
9ebc169bd252edb32c516b5ab386ce4bd2e7bc8d2dfd8d1f0c77e057bb234dae | def test_about_form(_about_widget):
'Test if the form layout has the proper about information.'
about_list = []
for label in about.about():
about_list.append(label.label)
about_list.append(label.repr)
for (index, item) in enumerate(about_list):
_widget = _about_widget._form_layout.itemAt(index).widget()
assert (item == _widget.text()) | Test if the form layout has the proper about information. | {{cookiecutter.project_slug}}/tests/test_about_widget.py | test_about_form | sisoe24/nuke-pyside-template | 0 | python | def test_about_form(_about_widget):
about_list = []
for label in about.about():
about_list.append(label.label)
about_list.append(label.repr)
for (index, item) in enumerate(about_list):
_widget = _about_widget._form_layout.itemAt(index).widget()
assert (item == _widget.text()) | def test_about_form(_about_widget):
about_list = []
for label in about.about():
about_list.append(label.label)
about_list.append(label.repr)
for (index, item) in enumerate(about_list):
_widget = _about_widget._form_layout.itemAt(index).widget()
assert (item == _widget.text())<|docstring|>Test if the form layout has the proper about information.<|endoftext|> |
c6d9be21fc51c4ae13744a962c46e8e706e51d9c91d7706701fe6cd27b1e6add | def test_about_grid(_about_widget):
'Test if grid layout has the proper about information.'
for (index, link) in enumerate(LINKS):
_widget = _about_widget._grid_layout.itemAt(index).widget()
assert (_widget.text() == link.label)
assert (_widget.property('link') == link.repr) | Test if grid layout has the proper about information. | {{cookiecutter.project_slug}}/tests/test_about_widget.py | test_about_grid | sisoe24/nuke-pyside-template | 0 | python | def test_about_grid(_about_widget):
for (index, link) in enumerate(LINKS):
_widget = _about_widget._grid_layout.itemAt(index).widget()
assert (_widget.text() == link.label)
assert (_widget.property('link') == link.repr) | def test_about_grid(_about_widget):
for (index, link) in enumerate(LINKS):
_widget = _about_widget._grid_layout.itemAt(index).widget()
assert (_widget.text() == link.label)
assert (_widget.property('link') == link.repr)<|docstring|>Test if the grid layout has the proper about information.<|endoftext|>
f908986589572cc2a5832af878fa394c89bf853d93753c28e58544a910c4f201 | def test_about_buttons(_about_widget):
'Test if about buttons are enabled.'
for (index, _) in enumerate(LINKS):
_widget = _about_widget._grid_layout.itemAt(index).widget()
assert _widget.isEnabled() | Test if about buttons are enabled. | {{cookiecutter.project_slug}}/tests/test_about_widget.py | test_about_buttons | sisoe24/nuke-pyside-template | 0 | python | def test_about_buttons(_about_widget):
for (index, _) in enumerate(LINKS):
_widget = _about_widget._grid_layout.itemAt(index).widget()
assert _widget.isEnabled() | def test_about_buttons(_about_widget):
for (index, _) in enumerate(LINKS):
_widget = _about_widget._grid_layout.itemAt(index).widget()
assert _widget.isEnabled()<|docstring|>Test if about buttons are enabled.<|endoftext|> |
ac4e828db20f40ba69caf551a9f85c5603e623f8704da6bb73c319f5bc54d120 | @pytest.mark.web
@pytest.mark.parametrize('link', LINKS, ids=[i.label for i in LINKS])
def test_about_links(link):
'Test if about links are reachable.'
if (link.label == 'Logs'):
assert os.path.exists(link.repr.replace('file:///', ''))
else:
assert (requests.get(link.repr, allow_redirects=True).status_code == 200) | Test if about links are reachable. | {{cookiecutter.project_slug}}/tests/test_about_widget.py | test_about_links | sisoe24/nuke-pyside-template | 0 | python | @pytest.mark.web
@pytest.mark.parametrize('link', LINKS, ids=[i.label for i in LINKS])
def test_about_links(link):
if (link.label == 'Logs'):
assert os.path.exists(link.repr.replace('file:///', ''))
else:
assert (requests.get(link.repr, allow_redirects=True).status_code == 200) | @pytest.mark.web
@pytest.mark.parametrize('link', LINKS, ids=[i.label for i in LINKS])
def test_about_links(link):
if (link.label == 'Logs'):
assert os.path.exists(link.repr.replace('file:///', ''))
else:
assert (requests.get(link.repr, allow_redirects=True).status_code == 200)<|docstring|>Test if about links are reachable.<|endoftext|>
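These tests assume each entry of LINKS exposes .label and .repr attributes; a minimal sketch of that shape under that assumption (the template's real container and URLs may differ):

from collections import namedtuple

# Hypothetical shape of the LINKS entries the tests iterate over.
Link = namedtuple('Link', ['label', 'repr'])

LINKS = [
    Link('Readme', 'https://github.com/sisoe24/nuke-pyside-template'),  # placeholder URL
    Link('Logs', 'file:///tmp/example.log'),  # file:/// entries skip the HTTP check
]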
ff11a583c076fe868348ffa8a954405bb60743ded8df6790002e99028f025295 | def get_gms_internet_policy_services(self) -> dict:
"Get configured services used in Overlay editor's Internet Policy\n section.\n\n .. list-table::\n :header-rows: 1\n\n * - Swagger Section\n - Method\n - Endpoint\n * - services\n - GET\n - /gms/services\n\n For every service, a 'Send to ' option will be shown in the\n available options list. The service name will correspond to the peer\n name of the pass through tunnel the internet traffic will go on. It\n is the user's responsibility to create these pass through tunnels on\n appliances.\n\n :return: Returns configured services\n :rtype: dict\n "
return self._get('/gms/services') | Get configured services used in Overlay editor's Internet Policy
section.
.. list-table::
:header-rows: 1
* - Swagger Section
- Method
- Endpoint
* - services
- GET
- /gms/services
For every service, a 'Send to ' option will be shown in the
available options list. The service name will correspond to the peer
name of the pass through tunnel the internet traffic will go on. It
is the user's responsibility to create these pass through tunnels on
appliances.
:return: Returns configured services
:rtype: dict | pyedgeconnect/orch/_services.py | get_gms_internet_policy_services | SPOpenSource/edgeconnect-python | 15 | python | def get_gms_internet_policy_services(self) -> dict:
"Get configured services used in Overlay editor's Internet Policy\n section.\n\n .. list-table::\n :header-rows: 1\n\n * - Swagger Section\n - Method\n - Endpoint\n * - services\n - GET\n - /gms/services\n\n For every service, a 'Send to ' option will be shown in the\n available options list. The service name will correspond to the peer\n name of the pass through tunnel the internet traffic will go on. It\n is the user's responsibility to create these pass through tunnels on\n appliances.\n\n :return: Returns configured services\n :rtype: dict\n "
return self._get('/gms/services') | def get_gms_internet_policy_services(self) -> dict:
"Get configured services used in Overlay editor's Internet Policy\n section.\n\n .. list-table::\n :header-rows: 1\n\n * - Swagger Section\n - Method\n - Endpoint\n * - services\n - GET\n - /gms/services\n\n For every service, a 'Send to ' option will be shown in the\n available options list. The service name will correspond to the peer\n name of the pass through tunnel the internet traffic will go on. It\n is the user's responsibility to create these pass through tunnels on\n appliances.\n\n :return: Returns configured services\n :rtype: dict\n "
return self._get('/gms/services')<|docstring|>Get configured services used in Overlay editor's Internet Policy
section.
.. list-table::
:header-rows: 1
* - Swagger Section
- Method
- Endpoint
* - services
- GET
- /gms/services
For every service, a 'Send to ' option will be shown in the
available options list. The service name will correspond to the peer
name of the pass through tunnel the internet traffic will go on. It
is the user's responsibility to create these pass through tunnels on
appliances.
:return: Returns configured services
:rtype: dict<|endoftext|> |
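A usage sketch, assuming an authenticated pyedgeconnect Orchestrator client and that the GET returns the same {"SERVICE_N": {"name": ...}} shape the companion POST accepts; the hostname and API key are placeholders:

from pyedgeconnect import Orchestrator

# Placeholder credentials -- substitute a reachable Orchestrator and a valid key.
orch = Orchestrator('orchestrator.example.com', api_key='MY-API-KEY')

services = orch.get_gms_internet_policy_services()
for service_id, details in services.items():
    print(service_id, details.get('name'))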
27784eff53441e82fb8f82b103429a138acafd7ec6139707e3b9b6d150f54641 | def update_gms_internet_policy_services(self, services: dict) -> bool:
'Set a new service list used in Overlay editor\'s Internet Policy\n section.\n\n .. list-table::\n :header-rows: 1\n\n * - Swagger Section\n - Method\n - Endpoint\n * - services\n - POST\n - /gms/services\n\n Saving a new service list will completely replace the current\n implementation. Any service IDs that were saved previously, but not\n included in the POST body will be removed. These services will also\n be removed from the overlay\'s policy list.\n\n :param services: Dictionary of services to be set in the form\n ``{"SERVICE_1" : {"name":"SERVICE_NAME_1"}, ...}``\n :type services: dict\n :return: Returns True/False based on successful call\n :rtype: bool\n '
return self._post('/gms/services', data=services, return_type='bool') | Set a new service list used in Overlay editor's Internet Policy
section.
.. list-table::
:header-rows: 1
* - Swagger Section
- Method
- Endpoint
* - services
- POST
- /gms/services
Saving a new service list will completely replace the current
implementation. Any service IDs that were saved previously, but not
included in the POST body will be removed. These services will also
be removed from the overlay's policy list.
:param services: Dictionary of services to be set in the form
``{"SERVICE_1" : {"name":"SERVICE_NAME_1"}, ...}``
:type services: dict
:return: Returns True/False based on successful call
:rtype: bool | pyedgeconnect/orch/_services.py | update_gms_internet_policy_services | SPOpenSource/edgeconnect-python | 15 | python | def update_gms_internet_policy_services(self, services: dict) -> bool:
'Set a new service list used in Overlay editor\'s Internet Policy\n section.\n\n .. list-table::\n :header-rows: 1\n\n * - Swagger Section\n - Method\n - Endpoint\n * - services\n - POST\n - /gms/services\n\n Saving a new service list will completely replace the current\n implementation. Any service IDs that were saved previously, but not\n included in the POST body will be removed. These services will also\n be removed from the overlay\'s policy list.\n\n :param services: Dictionary of services to be set in the form\n ``{"SERVICE_1" : {"name":"SERVICE_NAME_1"}, ...}``\n :type services: dict\n :return: Returns True/False based on successful call\n :rtype: bool\n '
return self._post('/gms/services', data=services, return_type='bool') | def update_gms_internet_policy_services(self, services: dict) -> bool:
'Set a new service list used in Overlay editor\'s Internet Policy\n section.\n\n .. list-table::\n :header-rows: 1\n\n * - Swagger Section\n - Method\n - Endpoint\n * - services\n - POST\n - /gms/services\n\n Saving a new service list will completely replace the current\n implementation. Any service IDs that were saved previously, but not\n included in the POST body will be removed. These services will also\n be removed from the overlay\'s policy list.\n\n :param services: Dictionary of services to be set in the form\n ``{"SERVICE_1" : {"name":"SERVICE_NAME_1"}, ...}``\n :type services: dict\n :return: Returns True/False based on successful call\n :rtype: bool\n '
return self._post('/gms/services', data=services, return_type='bool')<|docstring|>Set a new service list used in Overlay editor's Internet Policy
section.
.. list-table::
:header-rows: 1
* - Swagger Section
- Method
- Endpoint
* - services
- POST
- /gms/services
Saving a new service list will completely replace the current
implementation. Any service IDs that were saved previously, but not
included in the POST body will be removed. These services will also
be removed from the overlay's policy list.
:param services: Dictionary of services to be set in the form
``{"SERVICE_1" : {"name":"SERVICE_NAME_1"}, ...}``
:type services: dict
:return: Returns True/False based on successful call
:rtype: bool<|endoftext|> |
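Following the payload shape given in the docstring, a sketch of replacing the full service list; remember the POST overwrites whatever was saved before. Service IDs and names are placeholders, and orch is the hypothetical client from the sketch above:

new_services = {
    'SERVICE_1': {'name': 'Zscaler'},
    'SERVICE_2': {'name': 'Netskope'},
}

# Returns True on success; any service id omitted here is removed from overlays too.
if orch.update_gms_internet_policy_services(new_services):
    print('Internet policy services replaced')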
bbd50850b04b4865c327752737ee796d9c82b6a1f8156834a387cd73f66f3c35 | def get_gms_third_party_services(self) -> dict:
"Get configured services used in Overlay editor's Internet Policy\n section.\n\n .. list-table::\n :header-rows: 1\n\n * - Swagger Section\n - Method\n - Endpoint\n * - services\n - GET\n - /gms/thirdPartyServices\n\n The list of services is used to provide options in the Overlay\n editor's Internet Policy section. For every service, a dynamically\n generated options list will be shown in the available options list.\n The service name will correspond to the peer name of the pass\n through tunnel the internet traffic will go on. Pass through tunnels\n will be generated automatically.\n\n :return: Returns configured 3rd party services\n :rtype: dict\n "
return self._get('/gms/thirdPartyServices') | Get configured services used in Overlay editor's Internet Policy
section.
.. list-table::
:header-rows: 1
* - Swagger Section
- Method
- Endpoint
* - services
- GET
- /gms/thirdPartyServices
The list of services is used to provide options in the Overlay
editor's Internet Policy section. For every service, a dynamically
generated options list will be shown in the available options list.
The service name will correspond to the peer name of the pass
through tunnel the internet traffic will go on. Pass through tunnels
will be generated automatically.
:return: Returns configured 3rd party services
:rtype: dict | pyedgeconnect/orch/_services.py | get_gms_third_party_services | SPOpenSource/edgeconnect-python | 15 | python | def get_gms_third_party_services(self) -> dict:
"Get configured services used in Overlay editor's Internet Policy\n section.\n\n .. list-table::\n :header-rows: 1\n\n * - Swagger Section\n - Method\n - Endpoint\n * - services\n - GET\n - /gms/thirdPartyServices\n\n The list of services is used to provide options in the Overlay\n editor's Internet Policy section. For every service, a dynamically\n generated options list will be shown in the available options list.\n The service name will correspond to the peer name of the pass\n through tunnel the internet traffic will go on. Pass through tunnels\n will be generated automatically.\n\n :return: Returns configured 3rd party services\n :rtype: dict\n "
return self._get('/gms/thirdPartyServices') | def get_gms_third_party_services(self) -> dict:
"Get configured services used in Overlay editor's Internet Policy\n section.\n\n .. list-table::\n :header-rows: 1\n\n * - Swagger Section\n - Method\n - Endpoint\n * - services\n - GET\n - /gms/thirdPartyServices\n\n The list of services is used to provide options in the Overlay\n editor's Internet Policy section. For every service, a dynamically\n generated options list will be shown in the available options list.\n The service name will correspond to the peer name of the pass\n through tunnel the internet traffic will go on. Pass through tunnels\n will be generated automatically.\n\n :return: Returns configured 3rd party services\n :rtype: dict\n "
return self._get('/gms/thirdPartyServices')<|docstring|>Get configured services used in Overlay editor's Internet Policy
section.
.. list-table::
:header-rows: 1
* - Swagger Section
- Method
- Endpoint
* - services
- GET
- /gms/thirdPartyServices
The list of services is used to provide options in the Overlay
editor's Internet Policy section. For every service, a dynamically
generated options list will be shown in the available options list.
The service name will correspond to the peer name of the pass
through tunnel the internet traffic will go on. Pass through tunnels
will be generated automatically.
:return: Returns configured 3rd party services
:rtype: dict<|endoftext|> |
bb6c39fcab7725efbd7c4648c3b11b651ae7c762a057d1bbf13b42be81edf306 | async def prepare_table(self, cursor: Cursor=None) -> None:
'Prepares the table. Runs automatically when the class is instantiated.\n It also creates a cache on the class.'
(await cursor.execute(f'''CREATE TABLE IF NOT EXISTS {self.TABLE} (
GuildID BIGINT, Mode TEXT, Targets JSON
);'''))
(await self.update_cache(cursor=cursor)) | Prepares the table. Runs automatically when the class is instantiated.
It also creates a cache on the class. | cogs/blocker.py | prepare_table | RT-Team/rt-bot | 26 | python | async def prepare_table(self, cursor: Cursor=None) -> None:
'Prepares the table. Runs automatically when the class is instantiated.\n It also creates a cache on the class.'
(await cursor.execute(f'''CREATE TABLE IF NOT EXISTS {self.TABLE} (
GuildID BIGINT, Mode TEXT, Targets JSON
);'''))
(await self.update_cache(cursor=cursor)) | async def prepare_table(self, cursor: Cursor=None) -> None:
'Prepares the table. Runs automatically when the class is instantiated.\n It also creates a cache on the class.'
(await cursor.execute(f'''CREATE TABLE IF NOT EXISTS {self.TABLE} (
GuildID BIGINT, Mode TEXT, Targets JSON
);'''))
(await self.update_cache(cursor=cursor))<|docstring|>Prepares the table. Runs automatically when the class is instantiated.
It also creates a cache on the class.<|endoftext|>
2c8db64f3ec5427997b143ae81c602641c9af22cae8e18fd1a157d8bacdca57c | async def update_cache(self, cursor: Cursor=None) -> None:
'Updates the cache.'
self.cache = defaultdict((lambda : defaultdict(list)))
(await cursor.execute(f'SELECT * FROM {self.TABLE};'))
for row in (await cursor.fetchall()):
if row:
self.cache[row[0]][row[1]] = loads(row[2]) | Updates the cache. | cogs/blocker.py | update_cache | RT-Team/rt-bot | 26 | python | async def update_cache(self, cursor: Cursor=None) -> None:
self.cache = defaultdict((lambda : defaultdict(list)))
(await cursor.execute(f'SELECT * FROM {self.TABLE};'))
for row in (await cursor.fetchall()):
if row:
self.cache[row[0]][row[1]] = loads(row[2]) | async def update_cache(self, cursor: Cursor=None) -> None:
self.cache = defaultdict((lambda : defaultdict(list)))
(await cursor.execute(f'SELECT * FROM {self.TABLE};'))
for row in (await cursor.fetchall()):
if row:
self.cache[row[0]][row[1]] = loads(row[2])<|docstring|>Updates the cache.<|endoftext|>
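After update_cache runs, self.cache is a two-level mapping: guild id -> mode -> list of role ids. A self-contained sketch of that shape, with placeholder ids:

from collections import defaultdict
from json import loads

cache = defaultdict(lambda: defaultdict(list))
cache[693025129806037003]['emoji'] = loads('[814480398686521345]')  # placeholder ids
assert cache[693025129806037003]['emoji'] == [814480398686521345]
assert cache[0]['stamp'] == []  # unseen keys default to an empty list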
7d07e757e820645ba98ae1a490db4f42a3fdd19c148a0fecf9fbdf691f3c6179 | async def write(self, guild_id: int, mode: Mode, cursor: Cursor=None) -> bool:
'Applies the setting.'
if ((guild_id in self.cache) and (mode in self.cache[guild_id])):
(await cursor.execute(f'DELETE FROM {self.TABLE} WHERE GuildID = %s AND Mode = %s;', (guild_id, mode)))
del self.cache[guild_id][mode]
return False
else:
(await cursor.execute(f'INSERT INTO {self.TABLE} VALUES (%s, %s, %s);', (guild_id, mode, '[]')))
self.cache[guild_id][mode] = []
return True | Applies the setting. | cogs/blocker.py | write | RT-Team/rt-bot | 26 | python | async def write(self, guild_id: int, mode: Mode, cursor: Cursor=None) -> bool:
if ((guild_id in self.cache) and (mode in self.cache[guild_id])):
(await cursor.execute(f'DELETE FROM {self.TABLE} WHERE GuildID = %s AND Mode = %s;', (guild_id, mode)))
del self.cache[guild_id][mode]
return False
else:
(await cursor.execute(f'INSERT INTO {self.TABLE} VALUES (%s, %s, %s);', (guild_id, mode, '[]')))
self.cache[guild_id][mode] = []
return True | async def write(self, guild_id: int, mode: Mode, cursor: Cursor=None) -> bool:
if ((guild_id in self.cache) and (mode in self.cache[guild_id])):
(await cursor.execute(f'DELETE FROM {self.TABLE} WHERE GuildID = %s AND Mode = %s;', (guild_id, mode)))
del self.cache[guild_id][mode]
return False
else:
(await cursor.execute(f'INSERT INTO {self.TABLE} VALUES (%s, %s, %s);', (guild_id, mode, '[]')))
self.cache[guild_id][mode] = []
return True<|docstring|>Applies the setting.<|endoftext|>
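write behaves as a toggle: it returns True when it inserts a fresh row (enabling the mode for the guild) and False when it deletes an existing one. A database-free sketch of that toggle semantics:

# Minimal reimplementation of the toggle logic, without MySQL.
cache: dict = {}

def toggle(guild_id: int, mode: str) -> bool:
    guild = cache.setdefault(guild_id, {})
    if mode in guild:
        del guild[mode]   # mirrors the DELETE branch
        return False
    guild[mode] = []      # mirrors the INSERT branch
    return True

assert toggle(1, 'emoji') is True    # first call enables
assert toggle(1, 'emoji') is False   # second call disables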
32bd3cc2e55f4becf2b5222cdcfd9d049448b8c425ae244d29d84132679e6181 | def assert_blocker(self, guild_id: int, mode: Mode) -> None:
'Asserts whether the setting has been made.'
assert ((guild_id in self.cache) and (mode in self.cache[guild_id])), 'まだ設定がされていません。' | Asserts whether the setting has been made. | cogs/blocker.py | assert_blocker | RT-Team/rt-bot | 26 | python | def assert_blocker(self, guild_id: int, mode: Mode) -> None:
assert ((guild_id in self.cache) and (mode in self.cache[guild_id])), 'まだ設定がされていません。' | def assert_blocker(self, guild_id: int, mode: Mode) -> None:
assert ((guild_id in self.cache) and (mode in self.cache[guild_id])), 'まだ設定がされていません。'<|docstring|>Asserts whether the setting has been made.<|endoftext|>
a2699518871253ca9dc058dab02796b70c3f01fe3ea7ef2dac5e128e5b10bbda | async def add_role(self, guild_id: int, mode: Mode, role: int, cursor: Cursor=None) -> None:
'Adds a role to the block targets.'
self.assert_blocker(guild_id, mode)
self.cache[guild_id][mode].append(role)
(await self._update(cursor, guild_id, mode, self.cache[guild_id][mode])) | Adds a role to the block targets. | cogs/blocker.py | add_role | RT-Team/rt-bot | 26 | python | async def add_role(self, guild_id: int, mode: Mode, role: int, cursor: Cursor=None) -> None:
self.assert_blocker(guild_id, mode)
self.cache[guild_id][mode].append(role)
(await self._update(cursor, guild_id, mode, self.cache[guild_id][mode])) | async def add_role(self, guild_id: int, mode: Mode, role: int, cursor: Cursor=None) -> None:
self.assert_blocker(guild_id, mode)
self.cache[guild_id][mode].append(role)
(await self._update(cursor, guild_id, mode, self.cache[guild_id][mode]))<|docstring|>Adds a role to the block targets.<|endoftext|>
fbaab22e029bb1510eed145545a253273de2daf414ecf8e18e89d1fce08b56a3 | async def remove_role(self, guild_id: int, mode: Mode, role: int, cursor: Cursor=None) -> None:
'Removes a role from the block targets.'
self.assert_blocker(guild_id, mode)
assert (len(self.cache[guild_id][mode]) < self.MAX_ROLES), '登録しすぎです。'
self.cache[guild_id][mode].remove(role)
(await self._update(cursor, guild_id, mode, [])) | Removes a role from the block targets. | cogs/blocker.py | remove_role | RT-Team/rt-bot | 26 | python | async def remove_role(self, guild_id: int, mode: Mode, role: int, cursor: Cursor=None) -> None:
self.assert_blocker(guild_id, mode)
assert (len(self.cache[guild_id][mode]) < self.MAX_ROLES), '登録しすぎです。'
self.cache[guild_id][mode].remove(role)
(await self._update(cursor, guild_id, mode, [])) | async def remove_role(self, guild_id: int, mode: Mode, role: int, cursor: Cursor=None) -> None:
self.assert_blocker(guild_id, mode)
assert (len(self.cache[guild_id][mode]) < self.MAX_ROLES), '登録しすぎです。'
self.cache[guild_id][mode].remove(role)
(await self._update(cursor, guild_id, mode, []))<|docstring|>Removes a role from the block targets.<|endoftext|>
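A small sketch of the add/remove flow against the same in-memory shape. Note that as written the MAX_ROLES guard sits in remove_role, so the cap is checked on removal rather than on addition; MAX_ROLES and the ids below are placeholders:

MAX_ROLES = 10  # assumed cap; the real constant lives on the cog class

cache = {1: {'emoji': []}}

cache[1]['emoji'].append(814480398686521345)    # add_role: append then persist

assert len(cache[1]['emoji']) < MAX_ROLES       # guard as placed in remove_role
cache[1]['emoji'].remove(814480398686521345)    # remove_role: drop then persist
assert cache[1]['emoji'] == []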
ce70a5b4267903da23acb4364d6c994d2adeb476736bad8ea09d745b455cc3fe | @commands.group('blocker', aliases=['b', 'ブロッカー'], extras={'headding': {'ja': '絵文字,スタンプブロッカー', 'en': 'Emoji,Stamp blocker'}, 'parent': 'ServerSafety'})
async def blocker(self, ctx: commands.Context):
'!lang ja\n --------\n 特定のロールを持ってる人は絵文字またはスタンプを送信できないようにする機能です。\n\n Aliases\n -------\n b, ブロッカー\n\n !lang en\n --------\n This feature prevents people with a specific role from sending emoji or stamps.\n\n Aliases\n -------\n b'
if (not ctx.invoked_subcommand):
(await ctx.reply({'ja': '使用方法が違います。', 'en': 'It is wrong way to use this command.'})) | !lang ja
--------
特定のロールを持ってる人は絵文字またはスタンプを送信できないようにする機能です。
Aliases
-------
b, ブロッカー
!lang en
--------
This feature prevents people with a specific role from sending emoji or stamps.
Aliases
-------
b | cogs/blocker.py | blocker | RT-Team/rt-bot | 26 | python | @commands.group('blocker', aliases=['b', 'ブロッカー'], extras={'headding': {'ja': '絵文字,スタンプブロッカー', 'en': 'Emoji,Stamp blocker'}, 'parent': 'ServerSafety'})
async def blocker(self, ctx: commands.Context):
'!lang ja\n --------\n 特定のロールを持ってる人は絵文字またはスタンプを送信できないようにする機能です。\n\n Aliases\n -------\n b, ブロッカー\n\n !lang en\n --------\n This feature prevents people with a specific role from sending emoji or stamps.\n\n Aliases\n -------\n b'
if (not ctx.invoked_subcommand):
(await ctx.reply({'ja': '使用方法が違います。', 'en': 'It is wrong way to use this command.'})) | @commands.group('blocker', aliases=['b', 'ブロッカー'], extras={'headding': {'ja': '絵文字,スタンプブロッカー', 'en': 'Emoji,Stamp blocker'}, 'parent': 'ServerSafety'})
async def blocker(self, ctx: commands.Context):
'!lang ja\n --------\n 特定のロールを持ってる人は絵文字またはスタンプを送信できないようにする機能です。\n\n Aliases\n -------\n b, ブロッカー\n\n !lang en\n --------\n This feature prevents people with a specific role from sending emoji or stamps.\n\n Aliases\n -------\n b'
if (not ctx.invoked_subcommand):
(await ctx.reply({'ja': '使用方法が違います。', 'en': 'It is wrong way to use this command.'}))<|docstring|>!lang ja
--------
特定のロールを持ってる人は絵文字またはスタンプを送信できないようにする機能です。
Aliases
-------
b, ブロッカー
!lang en
--------
This feature prevents people with a specific role from sending emoji or stamps.
Aliases
-------
b<|endoftext|> |