<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calculate_sunrise_sunset(locator, calc_date=datetime.utcnow()):
"""calculates the next sunset and sunrise for a Maidenhead locator at a give date & time Args: locator1 (string):
Maidenhead Locator, either 4 or 6 characters calc_date (datetime, optional):
Starting datetime for the calculations (UTC) Returns: dict: Containing datetimes for morning_dawn, sunrise, evening_dawn, sunset Raises: ValueError: When called with wrong or invalid input arg AttributeError: When args are not a string Example: The following calculates the next sunrise & sunset for JN48QM on the 1./Jan/2014 { 'morning_dawn': datetime.datetime(2014, 1, 1, 6, 36, 51, 710524, tzinfo=<UTC>), 'sunset': datetime.datetime(2014, 1, 1, 16, 15, 23, 31016, tzinfo=<UTC>), 'evening_dawn': datetime.datetime(2014, 1, 1, 15, 38, 8, 355315, tzinfo=<UTC>), 'sunrise': datetime.datetime(2014, 1, 1, 7, 14, 6, 162063, tzinfo=<UTC>) } """ |
morning_dawn = None
sunrise = None
evening_dawn = None
sunset = None
latitude, longitude = locator_to_latlong(locator)
if type(calc_date) != datetime:
raise ValueError
sun = ephem.Sun()
home = ephem.Observer()
home.lat = str(latitude)
home.long = str(longitude)
home.date = calc_date
sun.compute(home)
try:
nextrise = home.next_rising(sun)
nextset = home.next_setting(sun)
home.horizon = '-6'
beg_twilight = home.next_rising(sun, use_center=True)
end_twilight = home.next_setting(sun, use_center=True)
morning_dawn = beg_twilight.datetime()
sunrise = nextrise.datetime()
evening_dawn = nextset.datetime()
sunset = end_twilight.datetime()
#if sun never sets or rises (e.g. at polar circles)
except ephem.AlwaysUpError as e:
morning_dawn = None
sunrise = None
evening_dawn = None
sunset = None
except ephem.NeverUpError as e:
morning_dawn = None
sunrise = None
evening_dawn = None
sunset = None
result = {}
result['morning_dawn'] = morning_dawn
result['sunrise'] = sunrise
result['evening_dawn'] = evening_dawn
result['sunset'] = sunset
if morning_dawn:
result['morning_dawn'] = morning_dawn.replace(tzinfo=UTC)
if sunrise:
result['sunrise'] = sunrise.replace(tzinfo=UTC)
if evening_dawn:
result['evening_dawn'] = evening_dawn.replace(tzinfo=UTC)
if sunset:
result['sunset'] = sunset.replace(tzinfo=UTC)
return result |
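A minimal usage sketch for the function above, assuming the module's own dependencies are importable (`locator_to_latlong`, `ephem`, and a `UTC` tzinfo such as `pytz.utc`); the locator and date come from the docstring example.

```python
from datetime import datetime

# JN48QM on 1 Jan 2014, as in the docstring example
times = calculate_sunrise_sunset('JN48QM', datetime(2014, 1, 1))
for event in ('morning_dawn', 'sunrise', 'evening_dawn', 'sunset'):
    # entries are None at polar latitudes when the sun never rises/sets
    print(event, times[event])
```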
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cloudpickle_dumps(obj, dumper=cloudpickle.dumps):
""" Encode Python objects into a byte stream using cloudpickle. """ |
return dumper(obj, protocol=serialization.pickle_protocol) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def patch_celery():
""" Monkey patch Celery to use cloudpickle instead of pickle. """ |
registry = serialization.registry
serialization.pickle = cloudpickle
registry.unregister('pickle')
registry.register('pickle', cloudpickle_dumps, cloudpickle_loads,
content_type='application/x-python-serialize',
content_encoding='binary')
import celery.worker as worker
import celery.concurrency.asynpool as asynpool
worker.state.pickle = cloudpickle
asynpool._pickle = cloudpickle
import billiard.common
billiard.common.pickle = cloudpickle
billiard.common.pickle_dumps = cloudpickle_dumps
billiard.common.pickle_loads = cloudpickle_loads |
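Call order matters here: the registry swap has to happen before tasks are serialized. A hedged wiring sketch (the app name and broker URL are illustrative, not part of the module above):

```python
from celery import Celery

patch_celery()  # register cloudpickle under the 'pickle' name first

app = Celery('jobs', broker='redis://localhost:6379/0')  # illustrative broker
app.conf.task_serializer = 'pickle'   # now backed by cloudpickle
app.conf.accept_content = ['pickle']
```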
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connect(self):
""" Connects to the redis database. """ |
self._connection = StrictRedis(
host=self._host,
port=self._port,
db=self._database,
password=self._password) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def receive(self):
""" Returns a single request. Takes the first request from the list of requests and returns it. If the list is empty, None is returned. Returns: Response: If a new request is available a Request object is returned, otherwise None is returned. """ |
pickled_request = self._connection.connection.lpop(self._request_key)
return pickle.loads(pickled_request) if pickled_request is not None else None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send(self, response):
""" Send a response back to the client that issued a request. Args: response (Response):
Reference to the response object that should be sent. """ |
self._connection.connection.set('{}:{}'.format(SIGNAL_REDIS_PREFIX, response.uid),
pickle.dumps(response)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def restore(self, request):
""" Push the request back onto the queue. Args: request (Request):
Reference to a request object that should be pushed back onto the request queue. """ |
self._connection.connection.rpush(self._request_key, pickle.dumps(request)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send(self, request):
""" Send a request to the server and wait for its response. Args: request (Request):
Reference to a request object that is sent to the server. Returns: Response: The response from the server to the request. """ |
self._connection.connection.rpush(self._request_key, pickle.dumps(request))
resp_key = '{}:{}'.format(SIGNAL_REDIS_PREFIX, request.uid)
while True:
if self._connection.polling_time > 0.0:
sleep(self._connection.polling_time)
response_data = self._connection.connection.get(resp_key)
if response_data is not None:
self._connection.connection.delete(resp_key)
break
return pickle.loads(response_data) |
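Together, `receive`, `send`, and `restore` implement a request/response rendezvous over two Redis primitives: a list used as a queue (`rpush`/`lpop`) and a per-request key for the reply. A stripped-down sketch of the same pattern with plain redis-py (all names are illustrative and a local Redis server is assumed):

```python
import pickle
from time import sleep

import redis

r = redis.StrictRedis()

# server side (normally another process): pop a request, leave a keyed reply
r.rpush('requests', pickle.dumps({'uid': '42', 'op': 'ping'}))
request = pickle.loads(r.lpop('requests'))
r.set('signal:%s' % request['uid'], pickle.dumps({'ok': True}))

# client side: poll for the reply under the request's uid, then clean up
while (raw := r.get('signal:42')) is None:
    sleep(0.1)
r.delete('signal:42')
print(pickle.loads(raw))  # {'ok': True}
```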
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def verify_pattern(pattern):
"""Verifies if pattern for matching and finding fulfill expected structure. :param pattern: string pattern to verify :return: True if pattern has proper syntax, False otherwise """ |
regex = re.compile("^!?[a-zA-Z]+$|[*]{1,2}$")
def __verify_pattern__(__pattern__):
if not __pattern__:
return False
elif __pattern__[0] == "!":
return __verify_pattern__(__pattern__[1:])
elif __pattern__[0] == "[" and __pattern__[-1] == "]":
return all(__verify_pattern__(p) for p in __pattern__[1:-1].split(","))
else:
return regex.match(__pattern__)
return all(__verify_pattern__(p) for p in pattern.split("/")) |
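A few calls showing what the verifier accepts and rejects, derived directly from the regex and recursion above:

```python
assert verify_pattern("VERB/dobj/NOUN")        # plain labels
assert verify_pattern("!VERB/[dobj,nsubj]/*")  # negation, alternatives, wildcard
assert verify_pattern("ROOT/**")               # recursive wildcard
assert not verify_pattern("VERB//NOUN")        # empty segment
assert not verify_pattern("123")               # non-alphabetic token
```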
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def match_tree(sentence, pattern):
"""Matches given sentence with provided pattern. :param sentence: sentence from Spacy(see: http://spacy.io/docs/#doc-spans-sents) representing complete statement :param pattern: pattern to which sentence will be compared :return: True if sentence match to pattern, False otherwise :raises: PatternSyntaxException: if pattern has wrong syntax """ |
if not verify_pattern(pattern):
raise PatternSyntaxException(pattern)
def _match_node(t, p):
pat_node = p.pop(0) if p else ""
return not pat_node or (_match_token(t, pat_node, False) and _match_edge(t.children,p))
def _match_edge(edges,p):
pat_edge = p.pop(0) if p else ""
if not pat_edge:
return True
elif not edges:
return False
else:
for t in edges:
if _match_token(t, pat_edge, True) and _match_node(t, list(p)):
return True
elif pat_edge == "**" and _match_edge(t.children, ["**"] + p):
return True
return False
return _match_node(sentence.root, pattern.split("/")) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def split(s, delimiter, trim=True, limit=0):
# pragma: no cover
""" Split a string using a single-character delimiter

    @params:
        `s`: the string
        `delimiter`: the single-character delimiter
        `trim`: whether to trim each part. Default: True
        `limit`: the maximum number of splits; 0 means no limit. Default: 0

    @examples:
        ```python
        ret = split("'a,b',c", ",")
        # ret == ["'a,b'", "c"]
        # ',' inside quotes will be recognized.
        ```

    @returns:
        The list of substrings
    """ |
ret = []
special1 = ['(', ')', '[', ']', '{', '}']
special2 = ['\'', '"']
special3 = '\\'
flags1 = [0, 0, 0]
flags2 = [False, False]
flags3 = False
start = 0
nlim = 0
for i, c in enumerate(s):
if c == special3:
# next char is escaped
flags3 = not flags3
elif not flags3:
# no escape
if c in special1:
index = special1.index(c)
if index % 2 == 0:
flags1[int(index/2)] += 1
else:
flags1[int(index/2)] -= 1
elif c in special2:
index = special2.index(c)
flags2[index] = not flags2[index]
elif c == delimiter and not any(flags1) and not any(flags2):
r = s[start:i]
if trim: r = r.strip()
ret.append(r)
start = i + 1
nlim = nlim + 1
if limit and nlim >= limit:
break
else:
# escaping closed
flags3 = False
r = s[start:]
if trim: r = r.strip()
ret.append(r)
return ret |
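A few calls illustrating the quote- and bracket-awareness, consistent with the docstring:

```python
print(split("'a,b',c", ","))   # ["'a,b'", 'c']  -> comma inside quotes kept
print(split("a(1,2),b", ","))  # ['a(1,2)', 'b'] -> comma inside brackets kept
print(split("a, b , c", ",", limit=1))  # ['a', 'b , c'] -> one split, then stop
```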
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def render(self, **context):
""" Render this template by applying it to `context`. @params: `context`: a dictionary of values to use in this rendering. @returns: The rendered string """ |
# Make the complete context we'll use.
localns = self.envs.copy()
localns.update(context)
try:
exec(str(self.code), None, localns)
return localns[Liquid.COMPLIED_RENDERED_STR]
except Exception:
stacks = list(reversed(traceback.format_exc().splitlines()))
for stack in stacks:
stack = stack.strip()
if stack.startswith('File "<string>"'):
lineno = int(stack.split(', ')[1].split()[-1])
source = []
if 'NameError:' in stacks[0]:
source.append('Do you forget to provide the data?')
import math
source.append('\nCompiled source (use debug mode to see full source):')
source.append('---------------------------------------------------')
nlines = len(self.code.codes)
nbit = int(math.log(nlines, 10)) + 3
for i, line in enumerate(self.code.codes):
if i - 7 > lineno or i + 9 < lineno: continue
if i + 1 != lineno:
source.append(' ' + (str(i+1) + '.').ljust(nbit) + str(line).rstrip())
else:
source.append('* ' + (str(i+1) + '.').ljust(nbit) + str(line).rstrip())
raise LiquidRenderError(
stacks[0],
repr(self.code.codes[lineno - 1]) +
'\n' + '\n'.join(source) +
'\n\nPREVIOUS EXCEPTION:\n------------------\n' +
'\n'.join(stacks) + '\n' +
'\nCONTEXT:\n------------------\n' +
'\n'.join(
' ' + key + ': ' + str(val)
for key, val in localns.items() if not key.startswith('_liquid_') and not key.startswith('__')
) + '\n'
)
raise |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def addLine(self, line):
""" Add a line of source to the code. Indentation and newline will be added for you, don't provide them. @params: `line`: The line to add """ |
if not isinstance(line, LiquidLine):
line = LiquidLine(line)
line.ndent = self.ndent
self.codes.append(line) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def level(self, lvl=None):
'''Get or set the logging level.'''
if not lvl:
return self._lvl
self._lvl = self._parse_level(lvl)
self.stream.setLevel(self._lvl)
logging.root.setLevel(self._lvl) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_rate_from_db(currency: str) -> Decimal: """ Fetch currency conversion rate from the database """ |
from .models import ConversionRate
try:
rate = ConversionRate.objects.get_rate(currency)
except ConversionRate.DoesNotExist: # noqa
raise ValueError('No conversion rate for %s' % (currency, ))
return rate.rate |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_conversion_rate(from_currency: str, to_currency: str) -> Decimal: """ Get conversion rate to use in exchange """ |
reverse_rate = False
if to_currency == BASE_CURRENCY:
# Fetch exchange rate for base currency and use 1 / rate for conversion
rate_currency = from_currency
reverse_rate = True
else:
rate_currency = to_currency
rate = get_rate_from_db(rate_currency)
if reverse_rate:
conversion_rate = Decimal(1) / rate
else:
conversion_rate = rate
return conversion_rate |
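The reverse-rate branch is plain arithmetic: rates are stored relative to the base currency, so converting back into the base uses the reciprocal. A standalone sketch of that logic without the ORM (names here are illustrative, not the module's API):

```python
from decimal import Decimal

BASE = 'USD'
rates = {'EUR': Decimal('0.92')}  # hypothetical base->currency rates

def conversion_rate(from_currency, to_currency):
    if to_currency == BASE:
        return Decimal(1) / rates[from_currency]  # reverse: use the reciprocal
    return rates[to_currency]

print(conversion_rate('USD', 'EUR'))  # 0.92
print(conversion_rate('EUR', 'USD'))  # 1.0869... (1 / 0.92)
```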
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_precipitation_stats(self, months=None, avg_stats=True, percentile=50):
""" Calculates precipitation statistics for the cascade model while aggregating hourly observations Parameters months : Months for each seasons to be used for statistics (array of numpy array, default=1-12, e.g., [np.arange(12) + 1]) avg_stats : average statistics for all levels True/False (default=True) percentile : percentil for splitting the dataset in small and high intensities (default=50) """ |
if months is None:
months = [np.arange(12) + 1]
self.precip.months = months
self.precip.stats = melodist.build_casc(self.data, months=months, avg_stats=avg_stats, percentile=percentile) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_wind_stats(self):
""" Calculates statistics in order to derive diurnal patterns of wind speed """ |
a, b, t_shift = melodist.fit_cosine_function(self.data.wind)
self.wind.update(a=a, b=b, t_shift=t_shift) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_humidity_stats(self):
""" Calculates statistics in order to derive diurnal patterns of relative humidity. """ |
a1, a0 = melodist.calculate_dewpoint_regression(self.data, return_stats=False)
self.hum.update(a0=a0, a1=a1)
self.hum.kr = 12
self.hum.month_hour_precip_mean = melodist.calculate_month_hour_precip_mean(self.data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_temperature_stats(self):
""" Calculates statistics in order to derive diurnal patterns of temperature """ |
self.temp.max_delta = melodist.get_shift_by_data(self.data.temp, self._lon, self._lat, self._timezone)
self.temp.mean_course = melodist.util.calculate_mean_daily_course_by_month(self.data.temp, normalize=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json(self, filename=None):
""" Exports statistical data to a JSON formatted file Parameters filename: output file that holds statistics data """ |
def json_encoder(obj):
if isinstance(obj, pd.DataFrame) or isinstance(obj, pd.Series):
if isinstance(obj.index, pd.core.index.MultiIndex):
obj = obj.reset_index() # convert MultiIndex to columns
return json.loads(obj.to_json(date_format='iso'))
elif isinstance(obj, melodist.cascade.CascadeStatistics):
return obj.__dict__
elif isinstance(obj, np.ndarray):
return obj.tolist()
else:
raise TypeError('%s not supported' % type(obj))
d = dict(
temp=self.temp,
wind=self.wind,
precip=self.precip,
hum=self.hum,
glob=self.glob
)
j = json.dumps(d, default=json_encoder, indent=4)
if filename is None:
return j
else:
with open(filename, 'w') as f:
f.write(j) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_json(cls, filename):
""" Imports statistical data from a JSON formatted file Parameters filename: input file that holds statistics data """ |
def json_decoder(d):
if 'p01' in d and 'pxx' in d: # we assume this is a CascadeStatistics object
return melodist.cascade.CascadeStatistics.from_dict(d)
return d
with open(filename) as f:
d = json.load(f, object_hook=json_decoder)
stats = cls()
stats.temp.update(d['temp'])
stats.hum.update(d['hum'])
stats.precip.update(d['precip'])
stats.wind.update(d['wind'])
stats.glob.update(d['glob'])
if stats.temp.max_delta is not None:
stats.temp.max_delta = pd.read_json(json.dumps(stats.temp.max_delta), typ='series').sort_index()
if stats.temp.mean_course is not None:
mc = pd.read_json(json.dumps(stats.temp.mean_course), typ='frame').sort_index()[np.arange(1, 12 + 1)]
stats.temp.mean_course = mc.sort_index()[np.arange(1, 12 + 1)]
if stats.hum.month_hour_precip_mean is not None:
mhpm = pd.read_json(json.dumps(stats.hum.month_hour_precip_mean), typ='frame').sort_index()
mhpm = mhpm.set_index(['level_0', 'level_1', 'level_2']) # convert to MultiIndex
mhpm = mhpm.squeeze() # convert to Series
mhpm = mhpm.rename_axis([None, None, None]) # remove index labels
stats.hum.month_hour_precip_mean = mhpm
for var in ('angstroem', 'bristcamp', 'mean_course'):
if stats.glob[var] is not None:
stats.glob[var] = pd.read_json(json.dumps(stats.glob[var])).sort_index()
if stats.glob.mean_course is not None:
stats.glob.mean_course = stats.glob.mean_course[np.arange(1, 12 + 1)]
return stats |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def canonical(self):
"""Return a tuple containing a canonicalized version of this location's country, state, county, and city names.""" |
try:
return tuple(map(lambda x: x.lower(), self.name()))
except:
return tuple([x.lower() for x in self.name()]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def name(self):
"""Return a tuple containing this location's country, state, county, and city names.""" |
try:
return tuple(
getattr(self, x) if getattr(self, x) else u''
for x in ('country', 'state', 'county', 'city'))
except:
return tuple(
getattr(self, x) if getattr(self, x) else ''
for x in ('country', 'state', 'county', 'city')) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parent(self):
"""Return a location representing the administrative unit above the one represented by this location.""" |
if self.city:
return Location(
country=self.country, state=self.state, county=self.county)
if self.county:
return Location(country=self.country, state=self.state)
if self.state:
return Location(country=self.country)
return Location() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def disaggregate_humidity(data_daily, method='equal', temp=None, a0=None, a1=None, kr=None, month_hour_precip_mean=None, preserve_daily_mean=False):
"""general function for humidity disaggregation Args: daily_data: daily values method: keyword specifying the disaggregation method to be used temp: hourly temperature time series (necessary for some methods) kr: parameter for linear_dewpoint_variation method (6 or 12) month_hour_precip_mean: [month, hour, precip(y/n)] categorical mean values preserve_daily_mean: if True, correct the daily mean values of the disaggregated data with the observed daily means. Returns: Disaggregated hourly values of relative humidity. """ |
assert method in ('equal',
'minimal',
'dewpoint_regression',
'min_max',
'linear_dewpoint_variation',
'month_hour_precip_mean'), 'Invalid option'
if method == 'equal':
hum_disagg = melodist.distribute_equally(data_daily.hum)
elif method in ('minimal', 'dewpoint_regression', 'linear_dewpoint_variation'):
if method == 'minimal':
a0 = 0
a1 = 1
assert a0 is not None and a1 is not None, 'a0 and a1 must be specified'
tdew_daily = a0 + a1 * data_daily.tmin
tdew = melodist.distribute_equally(tdew_daily)
if method == 'linear_dewpoint_variation':
assert kr is not None, 'kr must be specified'
assert kr in (6, 12), 'kr must be 6 or 12'
tdew_delta = 0.5 * np.sin((temp.index.hour + 1) * np.pi / kr - 3. * np.pi / 4.) # eq. (21) from Debele et al. (2007)
tdew_nextday = tdew.shift(-24)
tdew_nextday.iloc[-24:] = tdew.iloc[-24:] # copy the last day
# eq. (20) from Debele et al. (2007):
# (corrected - the equation is wrong both in Debele et al. (2007) and Bregaglio et al. (2010) - it should
# be (T_dp,day)_(d+1) - (T_dp,day)_d instead of the other way around)
tdew += temp.index.hour / 24. * (tdew_nextday - tdew) + tdew_delta
sat_vap_press_tdew = util.vapor_pressure(tdew, 100)
sat_vap_press_t = util.vapor_pressure(temp, 100)
hum_disagg = pd.Series(index=temp.index, data=100 * sat_vap_press_tdew / sat_vap_press_t)
elif method == 'min_max':
assert 'hum_min' in data_daily.columns and 'hum_max' in data_daily.columns, \
'Minimum and maximum humidity must be present in data frame'
hmin = melodist.distribute_equally(data_daily.hum_min)
hmax = melodist.distribute_equally(data_daily.hum_max)
tmin = melodist.distribute_equally(data_daily.tmin)
tmax = melodist.distribute_equally(data_daily.tmax)
hum_disagg = hmax + (temp - tmin) / (tmax - tmin) * (hmin - hmax)
elif method == 'month_hour_precip_mean':
assert month_hour_precip_mean is not None
precip_equal = melodist.distribute_equally(data_daily.precip) # daily precipitation equally distributed to hourly values
hum_disagg = pd.Series(index=precip_equal.index)
locs = list(zip(hum_disagg.index.month, hum_disagg.index.hour, precip_equal > 0))
hum_disagg[:] = month_hour_precip_mean.loc[locs].values
if preserve_daily_mean:
daily_mean_df = pd.DataFrame(data=dict(obs=data_daily.hum, disagg=hum_disagg.resample('D').mean()))
bias = melodist.util.distribute_equally(daily_mean_df.disagg - daily_mean_df.obs)
bias = bias.fillna(0)
hum_disagg -= bias
return hum_disagg.clip(0, 100) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _cosine_function(x, a, b, t_shift):
"""genrates a diurnal course of windspeed accroding to the cosine function Args: x: series of euqally distributed windspeed values a: parameter a for the cosine function b: parameter b for the cosine function t_shift: parameter t_shift for the cosine function Returns: series including diurnal course of windspeed. """ |
mean_wind, t = x
return a * mean_wind * np.cos(np.pi * (t - t_shift) / 12) + b * mean_wind |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def disaggregate_wind(wind_daily, method='equal', a=None, b=None, t_shift=None):
"""general function for windspeed disaggregation Args: wind_daily: daily values method: keyword specifying the disaggregation method to be used a: parameter a for the cosine function b: parameter b for the cosine function t_shift: parameter t_shift for the cosine function Returns: Disaggregated hourly values of windspeed. """ |
assert method in ('equal', 'cosine', 'random'), 'Invalid method'
wind_eq = melodist.distribute_equally(wind_daily)
if method == 'equal':
wind_disagg = wind_eq
elif method == 'cosine':
assert None not in (a, b, t_shift)
wind_disagg = _cosine_function(np.array([wind_eq.values, wind_eq.index.hour]), a, b, t_shift)
elif method == 'random':
wind_disagg = wind_eq * (-np.log(np.random.rand(len(wind_eq))))**0.3
return wind_disagg |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fit_cosine_function(wind):
"""fits a cosine function to observed hourly windspeed data Args: wind: observed hourly windspeed data Returns: parameters needed to generate diurnal features of windspeed using a cosine function """ |
wind_daily = wind.groupby(wind.index.date).mean()
wind_daily_hourly = pd.Series(index=wind.index, data=wind_daily.loc[wind.index.date].values) # daily values evenly distributed over the hours
df = pd.DataFrame(data=dict(daily=wind_daily_hourly, hourly=wind)).dropna(how='any')
x = np.array([df.daily, df.index.hour])
popt, pcov = scipy.optimize.curve_fit(_cosine_function, x, df.hourly)
return popt |
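A hedged round-trip check of the fit: synthesize hourly wind from known parameters with `_cosine_function`, then let `fit_cosine_function` recover them (pandas/numpy/scipy as imported by the module; the recovered `t_shift` is only unique up to the cosine's 24 h period and sign symmetry):

```python
import numpy as np
import pandas as pd

index = pd.date_range('2020-01-01', periods=24 * 30, freq='H')
mean_wind = np.full(len(index), 3.0)
a, b, t_shift = 0.8, 1.0, 2.0  # made-up "true" parameters
wind = pd.Series(_cosine_function(np.array([mean_wind, index.hour]), a, b, t_shift),
                 index=index)

print(fit_cosine_function(wind))  # approx. [0.8, 1.0, 2.0]
```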
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_smet(filename, data, metadata, nodata_value=-999, mode='h', check_nan=True):
"""writes smet files Parameters ---- filename : filename/loction of output data : data to write as pandas df metadata: header to write input as dict nodata_value: Nodata Value to write/use mode: defines if to write daily ("d") or continuos data (default 'h') check_nan: will check if only nans in data and if true will not write this colums (default True) """ |
# dictionary
# based on smet spec V.1.1 and selfdefined
# daily data
dict_d= {'tmean':'TA',
'tmin':'TMIN', #no spec
'tmax':'TMAX', #no spec
'precip':'PSUM',
'glob':'ISWR', #no spec
'hum':'RH',
'wind':'VW'
}
#hourly data
dict_h= {'temp':'TA',
'precip':'PSUM',
'glob':'ISWR', #no spec
'hum':'RH',
'wind':'VW'
}
#rename columns
if mode == "d":
data = data.rename(columns=dict_d)
if mode == "h":
data = data.rename(columns=dict_h)
if check_nan:
# get all columns containing data
datas_in = data.sum().dropna().to_frame().T
# get columns containing no data
drop = [data_nan for data_nan in data.columns if data_nan not in datas_in]
#delete columns
data = data.drop(drop, axis=1)
with open(filename, 'w') as f:
# prepare data
# convert datetimes to SMET timestamps
if mode == "d":
t = '%Y-%m-%dT00:00'
if mode == "h":
t = '%Y-%m-%dT%H:%M'
data['timestamp'] = [d.strftime(t) for d in data.index]
cols = data.columns.tolist()
cols = cols[-1:] + cols[:-1]
data = data[cols]
# update metadata
metadata['fields'] = ' '.join(data.columns)
metadata["units_multiplier"] = len(metadata['fields'].split())*"1 "
#writing data
#metadata
f.write('SMET 1.1 ASCII\n')
f.write('[HEADER]\n')
for k, v in metadata.items():
f.write('{} = {}\n'.format(k, v))
#data
f.write('[DATA]\n')
data_str = data.fillna(nodata_value).to_string(
header=False,
index=False,
float_format=lambda x: '{:.2f}'.format(x),
)
f.write(data_str) |
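A hedged call sketch; the header keys follow the SMET 1.1 convention but are illustrative here, and the exact set a consumer requires may differ:

```python
import pandas as pd

index = pd.date_range('2020-01-01', periods=3, freq='H')
df = pd.DataFrame({'temp': [270.1, 271.3, 272.0],
                   'precip': [0.0, 0.4, 0.0]}, index=index)

metadata = {  # illustrative SMET header entries
    'station_id': 'demo',
    'latitude': 47.0, 'longitude': 8.0, 'altitude': 500.0,
    'nodata': -999,
}
write_smet('demo.smet', df, metadata, mode='h')
```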
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_single_knmi_file(filename):
"""reads a single file of KNMI's meteorological time series data availability: www.knmi.nl/nederland-nu/klimatologie/uurgegevens Args: filename: the file to be opened Returns: pandas data frame including time series """ |
hourly_data_obs_raw = pd.read_csv(
filename,
parse_dates=[['YYYYMMDD', 'HH']],
date_parser=lambda yyyymmdd, hh: pd.datetime(int(str(yyyymmdd)[0:4]),
int(str(yyyymmdd)[4:6]),
int(str(yyyymmdd)[6:8]),
int(hh) - 1),
skiprows=31,
skipinitialspace=True,
na_values='',
keep_date_col=True,
)
hourly_data_obs_raw.index = hourly_data_obs_raw['YYYYMMDD_HH']
hourly_data_obs_raw.index = hourly_data_obs_raw.index + pd.Timedelta(hours=1)
columns_hourly = ['temp', 'precip', 'glob', 'hum', 'wind', 'ssd']
hourly_data_obs = pd.DataFrame(
index=hourly_data_obs_raw.index,
columns=columns_hourly,
data=dict(
temp=hourly_data_obs_raw['T'] / 10 + 273.15,
precip=hourly_data_obs_raw['RH'] / 10,
glob=hourly_data_obs_raw['Q'] * 10000 / 3600.,
hum=hourly_data_obs_raw['U'],
wind=hourly_data_obs_raw['FH'] / 10,
ssd=hourly_data_obs_raw['SQ'] * 6,
),
)
# remove negative values
negative_values = hourly_data_obs['precip'] < 0.0
hourly_data_obs.loc[negative_values, 'precip'] = 0.0
return hourly_data_obs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read_knmi_dataset(directory):
"""Reads files from a directory and merges the time series Please note: For each station, a separate directory must be provided! data availability: www.knmi.nl/nederland-nu/klimatologie/uurgegevens Args: directory: directory including the files Returns: pandas data frame including time series """ |
filemask = '%s*.txt' % directory
filelist = glob.glob(filemask)
columns_hourly = ['temp', 'precip', 'glob', 'hum', 'wind', 'ssd']
ts = pd.DataFrame(columns=columns_hourly)
first_call = True
for file_i in filelist:
print(file_i)
current = read_single_knmi_file(file_i)
if first_call:
ts = current
first_call = False
else:
ts = pd.concat([ts, current])
return ts |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def disaggregate_wind(self, method='equal'):
""" Disaggregate wind speed. Parameters method : str, optional Disaggregation method. ``equal`` Mean daily wind speed is duplicated for the 24 hours of the day. (Default) ``cosine`` Distributes daily mean wind speed using a cosine function derived from hourly observations. ``random`` Draws random numbers to distribute wind speed (usually not conserving the daily average). """ |
self.data_disagg.wind = melodist.disaggregate_wind(self.data_daily.wind, method=method, **self.statistics.wind) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def disaggregate_humidity(self, method='equal', preserve_daily_mean=False):
""" Disaggregate relative humidity. Parameters method : str, optional Disaggregation method. ``equal`` Mean daily humidity is duplicated for the 24 hours of the day. (Default) ``minimal``: Calculates humidity from daily dew point temperature by setting the dew point temperature equal to the daily minimum temperature. ``dewpoint_regression``: Calculates humidity from daily dew point temperature by calculating dew point temperature using ``Tdew = a * Tmin + b``, where ``a`` and ``b`` are determined by calibration. ``linear_dewpoint_variation``: Calculates humidity from hourly dew point temperature by assuming a linear dew point temperature variation between consecutive days. ``min_max``: Calculates hourly humidity from observations of daily minimum and maximum humidity. ``month_hour_precip_mean``: Calculates hourly humidity from categorical [month, hour, precip(y/n)] mean values derived from observations. preserve_daily_mean : bool, optional If True, correct the daily mean values of the disaggregated data with the observed daily means. """ |
self.data_disagg.hum = melodist.disaggregate_humidity(
self.data_daily,
temp=self.data_disagg.temp,
method=method,
preserve_daily_mean=preserve_daily_mean,
**self.statistics.hum
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def disaggregate_temperature(self, method='sine_min_max', min_max_time='fix', mod_nighttime=False):
""" Disaggregate air temperature. Parameters method : str, optional Disaggregation method. ``sine_min_max`` Hourly temperatures follow a sine function preserving daily minimum and maximum values. (Default) ``sine_mean`` Hourly temperatures follow a sine function preserving the daily mean value and the diurnal temperature range. ``sine`` Same as ``sine_min_max``. ``mean_course_min_max`` Hourly temperatures follow an observed average course (calculated for each month), preserving daily minimum and maximum values. ``mean_course_mean`` Hourly temperatures follow an observed average course (calculated for each month), preserving the daily mean value and the diurnal temperature range. min_max_time : str, optional Method to determine the time of minimum and maximum temperature. ``fix``: Minimum/maximum temperature are assumed to occur at 07:00/14:00 local time. ``sun_loc``: Minimum/maximum temperature are assumed to occur at sunrise / solar noon + 2 h. ``sun_loc_shift``: Minimum/maximum temperature are assumed to occur at sunrise / solar noon + monthly mean shift. mod_nighttime : bool, optional Use linear interpolation between minimum and maximum temperature. """ |
self.data_disagg.temp = melodist.disaggregate_temperature(
self.data_daily,
method=method,
min_max_time=min_max_time,
max_delta=self.statistics.temp.max_delta,
mean_course=self.statistics.temp.mean_course,
sun_times=self.sun_times,
mod_nighttime=mod_nighttime
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def disaggregate_precipitation(self, method='equal', zerodiv='uniform', shift=0, master_precip=None):
""" Disaggregate precipitation. Parameters method : str, optional Disaggregation method. ``equal`` Daily precipitation is distributed equally over the 24 hours of the day. (Default) ``cascade`` Hourly precipitation values are obtained using a cascade model set up using hourly observations. zerodiv : str, optional Method to deal with zero division, relevant for ``method='masterstation'``. ``uniform`` Use uniform distribution. (Default) master_precip : Series, optional Hourly precipitation records from a representative station (required for ``method='masterstation'``). """ |
if method == 'equal':
precip_disagg = melodist.disagg_prec(self.data_daily, method=method, shift=shift)
elif method == 'cascade':
precip_disagg = pd.Series(index=self.data_disagg.index)
for months, stats in zip(self.statistics.precip.months, self.statistics.precip.stats):
precip_daily = melodist.seasonal_subset(self.data_daily.precip, months=months)
if len(precip_daily) > 1:
data = melodist.disagg_prec(precip_daily, method=method, cascade_options=stats,
shift=shift, zerodiv=zerodiv)
precip_disagg.loc[data.index] = data
elif method == 'masterstation':
precip_disagg = melodist.precip_master_station(self.data_daily.precip, master_precip, zerodiv)
self.data_disagg.precip = precip_disagg |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def disaggregate_radiation(self, method='pot_rad', pot_rad=None):
""" Disaggregate solar radiation. Parameters method : str, optional Disaggregation method. ``pot_rad`` Calculates potential clear-sky hourly radiation and scales it according to the mean daily radiation. (Default) ``pot_rad_via_ssd`` Calculates potential clear-sky hourly radiation and scales it according to the observed daily sunshine duration. ``pot_rad_via_bc`` Calculates potential clear-sky hourly radiation and scales it according to daily minimum and maximum temperature. ``mean_course`` Hourly radiation follows an observed average course (calculated for each month). pot_rad : Series, optional Hourly values of potential solar radiation. If ``None``, calculated internally. """ |
if self.sun_times is None:
self.calc_sun_times()
if pot_rad is None and method != 'mean_course':
pot_rad = melodist.potential_radiation(self.data_disagg.index, self.lon, self.lat, self.timezone)
self.data_disagg.glob = melodist.disaggregate_radiation(
self.data_daily,
sun_times=self.sun_times,
pot_rad=pot_rad,
method=method,
angstr_a=self.statistics.glob.angstroem.a,
angstr_b=self.statistics.glob.angstroem.b,
bristcamp_a=self.statistics.glob.bristcamp.a,
bristcamp_c=self.statistics.glob.bristcamp.c,
mean_course=self.statistics.glob.mean_course
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def interpolate(self, column_hours, method='linear', limit=24, limit_direction='both', **kwargs):
""" Wrapper function for ``pandas.Series.interpolate`` that can be used to "disaggregate" values using various interpolation methods. Parameters column_hours : dict Dictionary containing column names in ``data_daily`` and the hour values they should be associated to. method, limit, limit_direction, **kwargs These parameters are passed on to ``pandas.Series.interpolate``. Examples -------- Assume that ``mystation.data_daily.T7``, ``mystation.data_daily.T14``, and ``mystation.data_daily.T19`` contain air temperature measurements taken at 07:00, 14:00, and 19:00. We can use the interpolation functions provided by pandas/scipy to derive hourly values: """ |
kwargs = dict(kwargs, method=method, limit=limit, limit_direction=limit_direction)
data = melodist.util.prepare_interpolation_data(self.data_daily, column_hours)
return data.interpolate(**kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _query_helper(self, by=None):
""" Internal helper for preparing queries. """ |
if by is None:
primary_keys = self.table.primary_key.columns.keys()
if len(primary_keys) > 1:
warnings.warn("WARNING: MORE THAN 1 PRIMARY KEY FOR TABLE %s. "
"USING THE FIRST KEY %s." %
(self.table.name, primary_keys[0]))
if not primary_keys:
raise NoPrimaryKeyException("Table %s needs a primary key for"
"the .last() method to work properly. "
"Alternatively, specify an ORDER BY "
"column with the by= argument. " %
self.table.name)
id_col = primary_keys[0]
else:
id_col = by
if self.column is None:
col = "*"
else:
col = self.column.name
return col, id_col |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query(self, sql_query, return_as="dataframe"):
""" Execute a raw SQL query against the the SQL DB. Args: sql_query (str):
A raw SQL query to execute. Kwargs: return_as (str):
Specify what type of object should be returned. The following are acceptable types: - "dataframe": pandas.DataFrame or None if no matching query - "result": sqlalchemy.engine.result.ResultProxy Returns: result (pandas.DataFrame or sqlalchemy ResultProxy):
Query result as a DataFrame (default) or sqlalchemy result (specified with return_as="result") Raises: QueryDbError """ |
if isinstance(sql_query, str):
pass
elif isinstance(sql_query, unicode):
sql_query = str(sql_query)
else:
raise QueryDbError("query() requires a str or unicode input.")
query = sqlalchemy.sql.text(sql_query)
if return_as.upper() in ["DF", "DATAFRAME"]:
return self._to_df(query, self._engine)
elif return_as.upper() in ["RESULT", "RESULTPROXY"]:
with self._engine.connect() as conn:
result = conn.execute(query)
return result
else:
raise QueryDbError("Other return types not implemented.") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _set_metadata(self):
""" Internal helper to set metadata attributes. """ |
meta = QueryDbMeta()
with self._engine.connect() as conn:
meta.bind = conn
meta.reflect()
self._meta = meta
# Set an inspect attribute, whose subattributes
# return individual tables / columns. Tables and columns
# are special classes with .last() and other convenience methods
self.inspect = QueryDbAttributes()
for table in self._meta.tables:
setattr(self.inspect, table,
QueryDbOrm(self._meta.tables[table], self))
table_attr = getattr(self.inspect, table)
table_cols = table_attr.table.columns
for col in table_cols.keys():
setattr(table_attr, col,
QueryDbOrm(table_cols[col], self))
# Finally add some summary info:
# Table name
# Primary Key item or list
# N of Cols
# Distinct Col Values (class so NVARCHAR(20) and NVARCHAR(30) are not different)
primary_keys = table_attr.table.primary_key.columns.keys()
self._summary_info.append((
table,
primary_keys[0] if len(primary_keys) == 1 else primary_keys,
len(table_cols),
len(set([x.type.__class__ for x in table_cols.values()])),
)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _to_df(self, query, conn, index_col=None, coerce_float=True, params=None, parse_dates=None, columns=None):
""" Internal convert-to-DataFrame convenience wrapper. """ |
return pd.io.sql.read_sql(str(query), conn, index_col=index_col,
coerce_float=coerce_float, params=params,
parse_dates=parse_dates, columns=columns) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def distribute_equally(daily_data, divide=False):
"""Obtains hourly values by equally distributing the daily values. Args: daily_data: daily values divide: if True, divide resulting values by the number of hours in order to preserve the daily sum (required e.g. for precipitation). Returns: Equally distributed hourly values. """ |
index = hourly_index(daily_data.index)
hourly_data = daily_data.reindex(index)
hourly_data = hourly_data.groupby(hourly_data.index.day).transform(
lambda x: x.fillna(method='ffill', limit=23))
if divide:
hourly_data /= 24
return hourly_data |
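For readers without melodist installed, the same effect can be approximated with plain pandas: build the hourly index, repeat each daily value 24 times, and divide when sums must be preserved. A hedged equivalent sketch:

```python
import pandas as pd

daily = pd.Series([24.0, 48.0],
                  index=pd.date_range('2020-01-01', periods=2, freq='D'))

# hourly index spanning both days, as hourly_index() does in melodist.util
hours = pd.date_range(daily.index[0], daily.index[-1] + pd.Timedelta(hours=23),
                      freq='H')
hourly = daily.reindex(hours).ffill(limit=23)   # repeat each daily value 24x
print((hourly / 24).groupby(hourly.index.date).sum())  # daily sums preserved
```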
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dewpoint_temperature(temp, hum):
"""computes the dewpoint temperature Parameters ---- temp : temperature [K] hum : relative humidity Returns dewpoint temperature in K """ |
assert(temp.shape == hum.shape)
vap_press = vapor_pressure(temp, hum)
positives = np.array(temp >= 273.15)
dewpoint_temp = temp.copy() * np.nan
dewpoint_temp[positives] = 243.12 * np.log(vap_press[positives] / 6.112) / (17.62 - np.log(vap_press[positives] / 6.112))
dewpoint_temp[~positives] = 272.62 * np.log(vap_press[~positives] / 6.112) / (22.46 - np.log(vap_press[~positives] / 6.112))
return dewpoint_temp + 273.15 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def linregress(x, y, return_stats=False):
"""linear regression calculation Parameters ---- x : independent variable (series) y : dependent variable (series) return_stats : returns statistical values as well if required (bool) Returns ---- list of parameters (and statistics) """ |
a1, a0, r_value, p_value, stderr = scipy.stats.linregress(x, y)
retval = a1, a0
if return_stats:
retval += r_value, p_value, stderr
return retval |
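A quick check of the wrapper's return contract; note the statistics are appended to the same flat tuple rather than nested:

```python
import numpy as np

x = np.arange(10.0)
y = 2.0 * x + 1.0  # exact line, so the fit is perfect

print(linregress(x, y))  # (2.0, 1.0)
a1, a0, r_value, p_value, stderr = linregress(x, y, return_stats=True)
print(r_value)           # 1.0
```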
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def detect_gaps(dataframe, timestep, print_all=False, print_max=5, verbose=True):
"""checks if a given dataframe contains gaps and returns the number of gaps This funtion checks if a dataframe contains any gaps for a given temporal resolution that needs to be specified in seconds. The number of gaps detected in the dataframe is returned. Args: dataframe: A pandas dataframe object with index defined as datetime timestep (int):
The temporal resolution of the time series in seconds (e.g., 86400 for daily values) print_all (bool, opt):
Lists every gap on the screen print_mx (int, opt):
The maximum number of gaps listed on the screen in order to avoid a decrease in performance if numerous gaps occur verbose (bool, opt):
Enables/disables output to the screen Returns: The number of gaps as integer. Negative values indicate errors. """ |
gcount = 0
msg_counter = 0
warning_printed = False
try:
n = len(dataframe.index)
except:
print('Error: Invalid dataframe.')
return -1
for i in range(0, n):
if i > 0:
time_diff = dataframe.index[i] - dataframe.index[i-1]
if time_diff.delta / 1E9 != timestep:
gcount += 1
if print_all or (msg_counter <= print_max - 1):
if verbose:
print('Warning: Gap in time series found between %s and %s' % (dataframe.index[i-1], dataframe.index[i]))
msg_counter += 1
if msg_counter == print_max and verbose and not warning_printed:
print('Warning: Only the first %i gaps have been listed. Increase the print_max parameter to show more details.' % msg_counter)
warning_printed = True
if verbose:
print('%i gaps found in total.' % (gcount))
return gcount |
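A minimal demonstration with one deliberately missing hour (note the function relies on `Timedelta.delta`, so it targets the older pandas versions this module was written against):

```python
import pandas as pd

index = pd.date_range('2020-01-01', periods=6, freq='H').delete(3)  # drop one hour
df = pd.DataFrame({'temp': range(5)}, index=index)

print(detect_gaps(df, 3600))  # hourly resolution in seconds -> reports 1 gap
```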
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def drop_incomplete_days(dataframe, shift=0):
"""truncates a given dataframe to full days only This funtion truncates a given pandas dataframe (time series) to full days only, thus dropping leading and tailing hours of incomplete days. Please note that this methodology only applies to hourly time series. Args: dataframe: A pandas dataframe object with index defined as datetime shift (unsigned int, opt):
First hour of daily recordings. For daily recordings of precipitation gages, 8 would be the first hour of the subsequent day of recordings since daily totals are usually recorded at 7. Omit defining this parameter if you intend to pertain recordings to 0-23h. Returns: A dataframe with full days only. """ |
dropped = 0
if shift > 23 or shift < 0:
print("Invalid shift parameter setting! Using defaults.")
shift = 0
first = shift
last = first - 1
if last < 0:
last += 24
try:
# todo: move this checks to a separate function
n = len(dataframe.index)
except:
print('Error: Invalid dataframe.')
return dataframe
delete = list()
# drop heading lines if required
for i in range(0, n):
if dataframe.index.hour[i] == first and dataframe.index.minute[i] == 0:
break
else:
delete.append(i)
dropped += 1
# drop tailing lines if required
for i in range(n-1, 0, -1):
if dataframe.index.hour[i] == last and dataframe.index.minute[i] == 0:
break
else:
delete.append(i)
dropped += 1
# print("The following rows have been dropped (%i in total):" % dropped)
# print(delete)
return dataframe.drop(dataframe.index[delete]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def disagg_prec(dailyData, method='equal', cascade_options=None, hourly_data_obs=None, zerodiv="uniform", shift=0):
"""The disaggregation function for precipitation. Parameters dailyData : pd.Series daily data method : str method to disaggregate cascade_options : cascade object including statistical parameters for the cascade model hourly_data_obs : pd.Series observed hourly data of master station zerodiv : str method to deal with zero division by key "uniform" --> uniform distribution shift : int shifts the precipitation data by shift (int) steps (eg +7 for 7:00 to 6:00) """ |
if method not in ('equal', 'cascade', 'masterstation'):
raise ValueError('Invalid option')
if method == 'equal':
precip_disagg = melodist.distribute_equally(dailyData.precip,
divide=True)
elif method == 'masterstation':
precip_disagg = precip_master_station(dailyData,
hourly_data_obs,
zerodiv)
elif method == 'cascade':
assert cascade_options is not None
precip_disagg = disagg_prec_cascade(dailyData,
cascade_options,
shift=shift)
return precip_disagg |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def precip_master_station(precip_daily, master_precip_hourly, zerodiv):
"""Disaggregate precipitation based on the patterns of a master station Parameters precip_daily : pd.Series daily data master_precip_hourly : pd.Series observed hourly data of the master station zerodiv : str method to deal with zero division by key "uniform" --> uniform distribution """ |
precip_hourly = pd.Series(index=melodist.util.hourly_index(precip_daily.index))
# set some parameters for cosine function
for index_d, precip in precip_daily.iteritems():
# get hourly data of the day
index = index_d.date().isoformat()
precip_h = master_precip_hourly[index]
# calc rel values and multiply by daily sums
# check for zero division
if precip_h.sum() != 0 and not np.isnan(precip_h.sum()):
precip_h_rel = (precip_h / precip_h.sum()) * precip
else:
# uniform option will preserve daily data by uniform distr
if zerodiv == 'uniform':
precip_h_rel = (1/24) * precip
else:
precip_h_rel = 0
# write the disaggregated day to data
precip_hourly[index] = precip_h_rel
return precip_hourly |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def seasonal_subset(dataframe,
months='all'):
'''Get the seasonal data.
Parameters
----------
dataframe : pd.DataFrame
months: int, str
Months to use for statistics, or 'all' for 1-12 (default='all')
'''
if isinstance(months, str) and months == 'all':
months = np.arange(12) + 1
for month_num, month in enumerate(months):
df_cur = dataframe[dataframe.index.month == month]
if month_num == 0:
df = df_cur
else:
df = df.append(df_cur)
return df.sort_index() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_casc(ObsData, hourly=True, level=9,
months=None,
avg_stats=True,
percentile=50):
'''Builds the cascade statistics of observed data for disaggregation
Parameters
-----------
ObsData : pd.Series
hourly=True -> hourly obs data
else -> 5min data (disaggregation level=9 (default), 10, 11)
months : numpy array of ints
Months for each seasons to be used for statistics (array of
numpy array, default=1-12, e.g., [np.arange(12) + 1])
avg_stats : bool
average statistics for all levels True/False (default=True)
percentile : int, float
percentile for splitting the dataset in small and high
intensities (default=50)
Returns
-------
list_seasonal_casc :
list holding the results
'''
list_seasonal_casc = list()
if months is None:
months = [np.arange(12) + 1]
# Parameter estimation for each season
for cur_months in months:
vdn = seasonal_subset(ObsData, cur_months)
if len(ObsData.precip[np.isnan(ObsData.precip)]) > 0:
ObsData.precip[np.isnan(ObsData.precip)] = 0
casc_opt = melodist.cascade.CascadeStatistics()
casc_opt.percentile = percentile
list_casc_opt = list()
count = 0
if hourly:
aggre_level = 5
else:
aggre_level = level
thresholds = np.zeros(aggre_level) #np.array([0., 0., 0., 0., 0.])
for i in range(0, aggre_level):
# aggregate the data
casc_opt_i, vdn = aggregate_precipitation(vdn, hourly, \
percentile=percentile)
thresholds[i] = casc_opt_i.threshold
copy_of_casc_opt_i = copy.copy(casc_opt_i)
list_casc_opt.append(copy_of_casc_opt_i)
n_vdn = len(vdn)
casc_opt_i * n_vdn # level related weighting
casc_opt + casc_opt_i # add to total statistics
count = count + n_vdn
casc_opt * (1. / count) # transfer weighted matrices to probabilities
casc_opt.threshold = thresholds
# statistics object
if avg_stats:
# in this case, the average statistics will be applied for all levels likewise
stat_obj = casc_opt
else:
# for longer time series, separate statistics might be more appropriate
# level dependent statistics will be assumed
stat_obj = list_casc_opt
list_seasonal_casc.append(stat_obj)
return list_seasonal_casc |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fill_with_sample_data(self):
"""This function fills the corresponding object with sample data.""" |
# replace these sample data with another dataset later
# this function is deprecated as soon as a common file format for this
# type of data will be available
self.p01 = np.array([[0.576724636119866, 0.238722774405744, 0.166532122130638, 0.393474644666218],
[0.303345245644811, 0.0490956843857575, 0.0392403031072856, 0.228441890034704]])
self.p10 = np.array([[0.158217002255554, 0.256581140990052, 0.557852226779526, 0.422638238585814],
[0.0439831163244427, 0.0474928027621488, 0.303675296728195, 0.217512052135178]])
self.pxx = np.array([[0.265058361624580, 0.504696084604205, 0.275615651089836, 0.183887116747968],
[0.652671638030746, 0.903411512852094, 0.657084400164519, 0.554046057830118]])
self.wxx = np.array([[[0.188389148850583, 0.0806836453984190, 0.0698113025807722, 0.0621499191745602],
[0.240993281622128, 0.0831019646519721, 0.0415130545715575, 0.155284541403192]],
[[0.190128959522795, 0.129220679033862, 0.0932213021787505, 0.193080698516532],
[0.196379692358065, 0.108549414860949, 0.0592714297292217, 0.0421945385836429]],
[[0.163043672107111, 0.152063537378127, 0.102823783410167, 0.0906028835221283],
[0.186579466868095, 0.189705690316132, 0.0990207345993082, 0.107831389238912]],
[[0.197765724699431, 0.220046257566978, 0.177876233348082, 0.261288786454262],
[0.123823472714948, 0.220514673922285, 0.102486496386323, 0.101975538893918]],
[[0.114435243444815, 0.170857634762767, 0.177327072603662, 0.135362730582518],
[0.0939211776723413,0.174291820501902, 0.125275822078525, 0.150842841725936]],
[[0.0988683809545079, 0.152323481100248, 0.185606883566286, 0.167242856061538],
[0.0760275616817939, 0.127275603247149, 0.202466168603738, 0.186580243138018]],
[[0.0473688704207573, 0.0948047647595988, 0.193333422312280, 0.0902721256884624],
[0.0822753470826286, 0.0965608324996108, 0.369966294031327, 0.255290907016382]]]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def names(cls):
"""A list of all emoji names without file extension.""" |
if not cls._files:
for f in os.listdir(cls._image_path):
if(not f.startswith('.') and
os.path.isfile(os.path.join(cls._image_path, f))):
cls._files.append(os.path.splitext(f)[0])
return cls._files |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def replace_unicode(cls, replacement_string):
"""This method will iterate over every character in ``replacement_string`` and see if it mathces any of the unicode codepoints that we recognize. If it does then it will replace that codepoint with an image just like ``replace``. NOTE: This will only work with Python versions built with wide unicode caracter support. Python 3 should always work but Python 2 will have to tested before deploy. """ |
e = cls()
output = []
surrogate_character = None
if settings.EMOJI_REPLACE_HTML_ENTITIES:
replacement_string = cls.replace_html_entities(replacement_string)
for i, character in enumerate(replacement_string):
if character in cls._unicode_modifiers:
continue
# Check whether this is the first character in a Unicode
# surrogate pair when Python doesn't have wide Unicode
# support.
#
# Is there any reason to do this even if Python got wide
# support enabled?
if(not UNICODE_WIDE and not surrogate_character and
ord(character) >= UNICODE_SURROGATE_MIN and
ord(character) <= UNICODE_SURROGATE_MAX):
surrogate_character = character
continue
if surrogate_character:
character = convert_unicode_surrogates(
surrogate_character + character
)
surrogate_character = None
name = e.name_for(character)
if name:
if settings.EMOJI_ALT_AS_UNICODE:
character = e._image_string(name, alt=character)
else:
character = e._image_string(name)
output.append(character)
return ''.join(output) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _convert_to_unicode(string):
"""This method should work with both Python 2 and 3 with the caveat that they need to be compiled with wide unicode character support. If there isn't wide unicode character support it'll blow up with a warning. """ |
codepoints = []
for character in string.split('-'):
if character in BLACKLIST_UNICODE:
continue
codepoints.append(
'\U{0:0>8}'.format(character).decode('unicode-escape')
)
return codepoints |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _delete_file(configurator, path):
""" remove file and remove it's directories if empty """ |
path = os.path.join(configurator.target_directory, path)
os.remove(path)
try:
os.removedirs(os.path.dirname(path))
except OSError:
pass |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _read_requirements(filename):
"""Parses a file for pip installation requirements.""" |
with open(filename) as requirements_file:
contents = requirements_file.read()
return [line.strip() for line in contents.splitlines() if _is_requirement(line)] |
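The _is_requirement predicate is referenced but not defined in this excerpt; a plausible definition, offered only as an assumption, skips blank lines, comments, and pip option lines:

def _is_requirement(line):
    """Hypothetical helper: keep only real package specifier lines."""
    line = line.strip()
    return bool(line) and not line.startswith(('#', '-'))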
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def assign_perm(perm, group):
""" Assigns a permission to a group """ |
if not isinstance(perm, Permission):
try:
app_label, codename = perm.split('.', 1)
except ValueError:
raise ValueError("For global permissions, first argument must be in"
" format: 'app_label.codename' (is %r)" % perm)
perm = Permission.objects.get(content_type__app_label=app_label, codename=codename)
group.permissions.add(perm)
return perm |
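A usage sketch, assuming a configured Django project (the group and permission names are illustrative):

from django.contrib.auth.models import Group

editors, _ = Group.objects.get_or_create(name='Editors')
assign_perm('auth.change_user', editors)  # the string form is split into app_label / codename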
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_perm(perm, group):
""" Removes a permission from a group """ |
if not isinstance(perm, Permission):
try:
app_label, codename = perm.split('.', 1)
except ValueError:
raise ValueError("For global permissions, first argument must be in"
" format: 'app_label.codename' (is %r)" % perm)
perm = Permission.objects.get(content_type__app_label=app_label, codename=codename)
group.permissions.remove(perm)
return |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_list_class(context, list):
""" Returns the class to use for the passed in list. We just build something up from the object type for the list. """ |
return "list_%s_%s" % (list.model._meta.app_label, list.model._meta.model_name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def format_datetime(time):
""" Formats a date, converting the time to the user timezone if one is specified """ |
user_time_zone = timezone.get_current_timezone()
if time.tzinfo is None:
time = time.replace(tzinfo=pytz.utc)
user_time_zone = pytz.timezone(getattr(settings, 'USER_TIME_ZONE', 'GMT'))
time = time.astimezone(user_time_zone)
return time.strftime("%b %d, %Y %H:%M") |
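A self-contained illustration of the conversion above, with an assumed user timezone standing in for the USER_TIME_ZONE setting:

import pytz
from datetime import datetime

naive = datetime(2014, 1, 1, 12, 0)                       # value without tzinfo
aware = naive.replace(tzinfo=pytz.utc)                    # treated as UTC, as above
local = aware.astimezone(pytz.timezone('Africa/Kigali'))  # assumed user timezone (UTC+2)
print(local.strftime("%b %d, %Y %H:%M"))                  # Jan 01, 2014 14:00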
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_value_from_view(context, field):
""" Responsible for deriving the displayed value for the passed in 'field'. This first checks for a particular method on the ListView, then looks for a method on the object, then finally treats it as an attribute. """ |
view = context['view']
obj = None
if 'object' in context:
obj = context['object']
value = view.lookup_field_value(context, obj, field)
# it's a date
if type(value) == datetime:
return format_datetime(value)
return value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_class(context, field, obj=None):
""" Looks up the class for this field """ |
view = context['view']
return view.lookup_field_class(field, obj, "field_" + field) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_label(context, field, obj=None):
""" Responsible for figuring out the right label for the passed in field. The order of precedence is: 1) if the view has a field_config and a label specified there, use that label 2) check for a form in the view, if it contains that field, use it's value """ |
view = context['view']
return view.lookup_field_label(context, field, obj) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_field_link(context, field, obj=None):
""" Determine what the field link should be for the given field, object pair """ |
view = context['view']
return view.lookup_field_link(context, field, obj) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_permissions_app_name():
""" Gets the app after which smartmin permissions should be installed. This can be specified by PERMISSIONS_APP in the Django settings or defaults to the last app with models """ |
global permissions_app_name
if not permissions_app_name:
permissions_app_name = getattr(settings, 'PERMISSIONS_APP', None)
if not permissions_app_name:
app_names_with_models = [a.name for a in apps.get_app_configs() if a.models_module is not None]
if app_names_with_models:
permissions_app_name = app_names_with_models[-1]
return permissions_app_name |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_all_group_permissions(sender, **kwargs):
""" Checks that all the permissions specified in our settings.py are set for our groups. """ |
if not is_permissions_app(sender):
return
config = getattr(settings, 'GROUP_PERMISSIONS', dict())
# for each of our items
for name, permissions in config.items():
# get or create the group
(group, created) = Group.objects.get_or_create(name=name)
if created:
pass
check_role_permissions(group, permissions, group.permissions.all()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_permission(content_type, permission):
""" Adds the passed in permission to that content type. Note that the permission passed in should be a single word, or verb. The proper 'codename' will be generated from that. """ |
# build our permission slug
codename = "%s_%s" % (content_type.model, permission)
# sys.stderr.write("Checking %s permission for %s\n" % (permission, content_type.name))
# does it already exist
if not Permission.objects.filter(content_type=content_type, codename=codename):
Permission.objects.create(content_type=content_type,
codename=codename,
name="Can %s %s" % (permission, content_type.name)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_all_permissions(sender, **kwargs):
""" This syncdb checks our PERMISSIONS setting in settings.py and makes sure all those permissions actually exit. """ |
if not is_permissions_app(sender):
return
config = getattr(settings, 'PERMISSIONS', dict())
# for each of our items
for natural_key, permissions in config.items():
# if the natural key '*' then that means add to all objects
if natural_key == '*':
# for each of our content types
for content_type in ContentType.objects.all():
for permission in permissions:
add_permission(content_type, permission)
# otherwise, this is on a specific content type, add for each of those
else:
app, model = natural_key.split('.')
try:
content_type = ContentType.objects.get_by_natural_key(app, model)
except ContentType.DoesNotExist:
continue
# add each permission
for permission in permissions:
add_permission(content_type, permission) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save(self, commit=True):
""" Overloaded so we can save any new password that is included. """ |
is_new_user = self.instance.pk is None
user = super(UserForm, self).save(commit)
# new users should be made active by default
if is_new_user:
user.is_active = True
# if we had a new password set, use it
new_pass = self.cleaned_data['new_password']
if new_pass:
user.set_password(new_pass)
if commit:
user.save()
return user |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def derive_single_object_url_pattern(slug_url_kwarg, path, action):
""" Utility function called by class methods for single object views """ |
if slug_url_kwarg:
return r'^%s/%s/(?P<%s>[^/]+)/$' % (path, action, slug_url_kwarg)
else:
return r'^%s/%s/(?P<pk>\d+)/$' % (path, action) |
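For illustration, the two branches produce patterns like (the path and slug kwarg are hypothetical):

derive_single_object_url_pattern('uuid', 'contacts', 'read')
# -> r'^contacts/read/(?P<uuid>[^/]+)/$'
derive_single_object_url_pattern(None, 'contacts', 'update')
# -> r'^contacts/update/(?P<pk>\d+)/$'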
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dispatch(self, request, *args, **kwargs):
""" Overloaded to check permissions if appropriate """ |
def wrapper(request, *args, **kwargs):
if not self.has_permission(request, *args, **kwargs):
path = urlquote(request.get_full_path())
login_url = kwargs.pop('login_url', settings.LOGIN_URL)
redirect_field_name = kwargs.pop('redirect_field_name', REDIRECT_FIELD_NAME)
return HttpResponseRedirect("%s?%s=%s" % (login_url, redirect_field_name, path))
else:
response = self.pre_process(request, *args, **kwargs)
if not response:
return super(SmartView, self).dispatch(request, *args, **kwargs)
else:
return response
return wrapper(request, *args, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lookup_obj_attribute(self, obj, field):
""" Looks for a field's value from the passed in obj. Note that this will strip leading attributes to deal with subelements if possible """ |
curr_field = field.encode('ascii', 'ignore').decode("utf-8")
rest = None
if field.find('.') >= 0:
curr_field = field.split('.')[0]
rest = '.'.join(field.split('.')[1:])
# next up is the object itself
obj_field = getattr(obj, curr_field, None)
# if it is callable, do so
if obj_field and getattr(obj_field, '__call__', None):
obj_field = obj_field()
if obj_field and rest:
return self.lookup_obj_attribute(obj_field, rest)
else:
return obj_field |
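A standalone sketch of the same dotted-path traversal, without the view class (behavior on falsy intermediate values differs slightly from the method above):

def deep_getattr(obj, dotted_path):
    """Walk 'a.b.c' style attribute paths, invoking any callables along the way."""
    for part in dotted_path.split('.'):
        obj = getattr(obj, part, None)
        if obj is None:
            return None
        if callable(obj):
            obj = obj()
    return obj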
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lookup_field_value(self, context, obj, field):
""" Looks up the field value for the passed in object and field name. Note that this method is actually called from a template, but this provides a hook for subclasses to modify behavior if they wish to do so. This may be used for example to change the display value of a variable depending on other variables within our context. """ |
curr_field = field.encode('ascii', 'ignore').decode("utf-8")
# if this isn't a subfield, check the view to see if it has a get_ method
if field.find('.') == -1:
# view supersedes all, does it have a 'get_' method for this obj
view_method = getattr(self, 'get_%s' % curr_field, None)
if view_method:
return view_method(obj)
return self.lookup_obj_attribute(obj, field) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lookup_field_class(self, field, obj=None, default=None):
""" Looks up any additional class we should include when rendering this field """ |
css = ""
# is there a class specified for this field
if field in self.field_config and 'class' in self.field_config[field]:
css = self.field_config[field]['class']
# if we were given a default, use that
elif default:
css = default
return css |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_template_names(self):
""" Returns the name of the template to use to render this request. Smartmin provides default templates as fallbacks, so appends it's own templates names to the end of whatever list is built by the generic views. Subclasses can override this by setting a 'template_name' variable on the class. """ |
templates = []
if getattr(self, 'template_name', None):
templates.append(self.template_name)
if getattr(self, 'default_template', None):
templates.append(self.default_template)
else:
templates = super(SmartView, self).get_template_names()
return templates |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_context_data(self, **kwargs):
""" We supplement the normal context data by adding our fields and labels. """ |
context = super(SmartView, self).get_context_data(**kwargs)
# derive our field config
self.field_config = self.derive_field_config()
# add our fields
self.fields = self.derive_fields()
# build up our current parameter string, EXCLUSIVE of our page. These
# are used to build pagination URLs
url_params = "?"
order_params = ""
for key in self.request.GET.keys():
if key != 'page' and key != 'pjax' and (len(key) == 0 or key[0] != '_'):
for value in self.request.GET.getlist(key):
url_params += "%s=%s&" % (key, urlquote(value))
elif key == '_order':
order_params = "&".join(["%s=%s" % (key, _) for _ in self.request.GET.getlist(key)])
context['url_params'] = url_params
context['order_params'] = order_params + "&"
context['pjax'] = self.pjax
# set our blocks
context['blocks'] = dict()
# stuff it all in our context
context['fields'] = self.fields
context['view'] = self
context['field_config'] = self.field_config
context['title'] = self.derive_title()
# and any extra context the user specified
context.update(self.extra_context)
# by default, our base is 'base.html', but we might be pjax
base_template = "base.html"
if 'pjax' in self.request.GET or 'pjax' in self.request.POST:
base_template = "smartmin/pjax.html"
if 'HTTP_X_PJAX' in self.request.META:
base_template = "smartmin/pjax.html"
context['base_template'] = base_template
# set our refresh if we have one
refresh = self.derive_refresh()
if refresh:
context['refresh'] = refresh
return context |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def derive_fields(self):
""" Derives our fields. We first default to using our 'fields' variable if available, otherwise we figure it out from our object. """ |
if self.fields:
return list(self.fields)
else:
fields = []
for field in self.object._meta.fields:
fields.append(field.name)
# only exclude? then remove those items there
exclude = self.derive_exclude()
# remove any excluded fields
fields = [field for field in fields if field not in exclude]
return fields |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_context_data(self, **kwargs):
""" Add in the field to use for the name field """ |
context = super(SmartDeleteView, self).get_context_data(**kwargs)
context['name_field'] = self.name_field
context['cancel_url'] = self.get_cancel_url()
return context |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def derive_title(self):
""" Derives our title from our list """ |
title = super(SmartListView, self).derive_title()
if not title:
return force_text(self.model._meta.verbose_name_plural).title()
else:
return title |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lookup_field_orderable(self, field):
""" Returns whether the passed in field is sortable or not, by default all 'raw' fields, that is fields that are part of the model are sortable. """ |
try:
self.model._meta.get_field_by_name(field)
return True
except Exception:
# that field doesn't exist, so not sortable
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_context_data(self, **kwargs):
""" Add in what fields are linkable """ |
context = super(SmartListView, self).get_context_data(**kwargs)
# our linkable fields
self.link_fields = self.derive_link_fields(context)
# stuff it all in our context
context['link_fields'] = self.link_fields
# our search term if any
if 'search' in self.request.GET:
context['search'] = self.request.GET['search']
# our ordering field if any
order = self.derive_ordering()
if order:
if order[0] == '-':
context['order'] = order[1:]
context['order_asc'] = False
else:
context['order'] = order
context['order_asc'] = True
return context |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def derive_queryset(self, **kwargs):
""" Derives our queryset. """ |
# get our parent queryset
queryset = super(SmartListView, self).get_queryset(**kwargs)
# apply any filtering
search_fields = self.derive_search_fields()
search_query = self.request.GET.get('search')
if search_fields and search_query:
term_queries = []
for term in search_query.split(' '):
field_queries = []
for field in search_fields:
field_queries.append(Q(**{field: term}))
term_queries.append(reduce(operator.or_, field_queries))
queryset = queryset.filter(reduce(operator.and_, term_queries))
# add any select related
related = self.derive_select_related()
if related:
queryset = queryset.select_related(*related)
# return our queryset
return queryset |
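The search filter above builds an AND of per-term ORs; spelled out with assumed field lookups:

import operator
from functools import reduce
from django.db.models import Q

search_fields = ('name__icontains', 'email__icontains')  # assumed configuration
terms = 'john doe'.split(' ')

term_queries = [
    reduce(operator.or_, [Q(**{field: term}) for field in search_fields])
    for term in terms
]
combined = reduce(operator.and_, term_queries)
# queryset.filter(combined): every term must match at least one search field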
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_queryset(self, **kwargs):
""" Gets our queryset. This takes care of filtering if there are any fields to filter by. """ |
queryset = self.derive_queryset(**kwargs)
return self.order_queryset(queryset) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def order_queryset(self, queryset):
""" Orders the passed in queryset, returning a new queryset in response. By default uses the _order query parameter. """ |
order = self.derive_ordering()
# if we get our order from the request
# make sure it is a valid field in the list
if '_order' in self.request.GET:
if order.lstrip('-') not in self.derive_fields():
order = None
if order:
# if our order is a single string, convert to a simple list
if isinstance(order, str):
order = (order,)
queryset = queryset.order_by(*order)
return queryset |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def customize_form_field(self, name, field):
""" Allows views to customize their form fields. By default, Smartmin replaces the plain textbox date input with it's own DatePicker implementation. """ |
if isinstance(field, forms.fields.DateField) and isinstance(field.widget, forms.widgets.DateInput):
field.widget = widgets.DatePickerWidget()
field.input_formats = [field.widget.input_format[1]] + list(field.input_formats)
if isinstance(field, forms.fields.ImageField) and isinstance(field.widget, forms.widgets.ClearableFileInput):
field.widget = widgets.ImageThumbnailWidget()
return field |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def derive_readonly(self):
""" Figures out what fields should be readonly. We iterate our field_config to find all that have a readonly of true """ |
readonly = list(self.readonly)
for key, value in self.field_config.items():
if 'readonly' in value and value['readonly']:
readonly.append(key)
return readonly |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_form_class(self):
""" Returns the form class to use in this view """ |
if self.form_class:
form_class = self.form_class
else:
if self.model is not None:
# If a model has been explicitly provided, use it
model = self.model
elif hasattr(self, 'object') and self.object is not None:
# If this view is operating on a single object, use
# the class of that object
model = self.object.__class__
else:
# Try to get a queryset and extract the model class
# from that
model = self.get_queryset().model
# run time parameters when building our form
factory_kwargs = self.get_factory_kwargs()
form_class = model_forms.modelform_factory(model, **factory_kwargs)
return form_class |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_factory_kwargs(self):
""" Let's us specify any extra parameters we might want to call for our form factory. These can include: 'form', 'fields', 'exclude' or 'formfield_callback' """ |
params = dict()
exclude = self.derive_exclude()
exclude += self.derive_readonly()
if self.fields:
fields = list(self.fields)
for ex in exclude:
if ex in fields:
fields.remove(ex)
params['fields'] = fields
if exclude:
params['exclude'] = exclude
return params |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_success_url(self):
""" By default we use the referer that was stuffed in our form when it was created """ |
if self.success_url:
# if our smart url references an object, pass that in
if self.success_url.find('@') > 0:
return smart_url(self.success_url, self.object)
else:
return smart_url(self.success_url, None)
elif 'loc' in self.form.cleaned_data:
return self.form.cleaned_data['loc']
raise ImproperlyConfigured("No redirect location found, override get_success_url to not use redirect urls") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_form_kwargs(self):
""" We override this, using only those fields specified if they are specified. Otherwise we include all fields in a standard ModelForm. """ |
kwargs = super(SmartFormMixin, self).get_form_kwargs()
kwargs['initial'] = self.derive_initial()
return kwargs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def derive_title(self):
""" Derives our title from our object """ |
if not self.title:
return _("Create %s") % force_text(self.model._meta.verbose_name).title()
else:
return self.title |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def permission_for_action(self, action):
""" Returns the permission to use for the passed in action """ |
return "%s.%s_%s" % (self.app_name.lower(), self.model_name.lower(), action) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def template_for_action(self, action):
""" Returns the template to use for the passed in action """ |
return "%s/%s_%s.html" % (self.module_name.lower(), self.model_name.lower(), action) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def url_name_for_action(self, action):
""" Returns the reverse name for this action """ |
return "%s.%s_%s" % (self.module_name.lower(), self.model_name.lower(), action) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pattern_for_view(self, view, action):
""" Returns the URL pattern for the passed in action. """ |
# if this view knows how to define a URL pattern, call that
if getattr(view, 'derive_url_pattern', None):
return view.derive_url_pattern(self.path, action)
# otherwise take our best guess
else:
return r'^%s/%s/$' % (self.path, action) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def as_urlpatterns(self):
""" Creates the appropriate URLs for this object. """ |
urls = []
# for each of our actions
for action in self.actions:
view_class = self.view_for_action(action)
view_pattern = self.pattern_for_view(view_class, action)
name = self.url_name_for_action(action)
urls.append(url(view_pattern, view_class.as_view(), name=name))
return urls |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_migrations(self):  # pragma: no cover
""" Loads all migrations in the order they would be applied to a clean database """ |
executor = MigrationExecutor(connection=None)
# create the forwards plan Django would follow on an empty database
plan = executor.migration_plan(executor.loader.graph.leaf_nodes(), clean_start=True)
if self.verbosity >= 2:
for migration, _ in plan:
self.stdout.write(" > %s" % migration)
return [m[0] for m in plan] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def extract_operations(self, migrations):
""" Extract SQL operations from the given migrations """ |
operations = []
for migration in migrations:
for operation in migration.operations:
if isinstance(operation, RunSQL):
statements = sqlparse.parse(dedent(operation.sql))
for statement in statements:
operation = SqlObjectOperation.parse(statement)
if operation:
operations.append(operation)
if self.verbosity >= 2:
self.stdout.write(" > % -100s (%s)" % (operation, migration))
return operations |
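A small demonstration of the sqlparse call used above (SqlObjectOperation belongs to the surrounding module and is not shown in this excerpt):

import sqlparse
from textwrap import dedent

sql = dedent('''
    CREATE INDEX contacts_name_idx ON contacts(name);
    CREATE TRIGGER contacts_audit AFTER UPDATE ON contacts EXECUTE PROCEDURE audit();
''').strip()
for statement in sqlparse.parse(sql):
    print(statement.get_type(), str(statement).strip()[:40])  # e.g. "CREATE CREATE INDEX contacts_name_idx ..."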
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_type_dumps(self, operations, preserve_order, output_dir):
""" Splits the list of SQL operations by type and dumps these to separate files """ |
by_type = {SqlType.INDEX: [], SqlType.FUNCTION: [], SqlType.TRIGGER: []}
for operation in operations:
by_type[operation.sql_type].append(operation)
# optionally sort each operation list by the object name
if not preserve_order:
for obj_type, ops in by_type.items():
by_type[obj_type] = sorted(ops, key=lambda o: o.obj_name)
if by_type[SqlType.INDEX]:
self.write_dump('indexes', by_type[SqlType.INDEX], output_dir)
if by_type[SqlType.FUNCTION]:
self.write_dump('functions', by_type[SqlType.FUNCTION], output_dir)
if by_type[SqlType.TRIGGER]:
self.write_dump('triggers', by_type[SqlType.TRIGGER], output_dir) |