Dataset schema (one record per function; column length statistics from the dataset viewer):

- `body_hash`: string, length 64
- `body`: string, 23 to 109k characters
- `docstring`: string, 1 to 57k characters
- `path`: string, 4 to 198 characters
- `name`: string, 1 to 115 characters
- `repository_name`: string, 7 to 111 characters
- `repository_stars`: float64, 0 to 191k
- `lang`: 1 class (`python`)
- `body_without_docstring`: string, 14 to 108k characters
- `unified`: string, 45 to 133k characters (`body_without_docstring` followed by the docstring between `<|docstring|>` and `<|endoftext|>` markers)
`body_hash`: 225107fe36f315f576511284033cbad4c525cdd95bb352893a73a8d24ea5f21f
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: marker | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
@property
def marker(self):
    """Gets the marker of this ListDevicesRequest.

    **Parameter description**: ID of the last record in the previous page of
    results, returned by the IoT platform in that query. The platform pages in
    descending order of marker, i.e. record ID, so newer records have larger
    IDs. If marker is supplied, only records with IDs smaller than marker are
    returned; if it is omitted, the query starts from the record with the
    largest, i.e. newest, ID. To walk through all data, each query must pass
    the marker value from the previous response.
    **Value range**: a 24-character hexadecimal string; default
    ffffffffffffffffffffffff.

    :return: The marker of this ListDevicesRequest.
    :rtype: str
    """
    return self._marker
```
`body_hash`: 66f0272658daacf96ba1447e4ab8e8ea97d5489292a7f4dbe47add3eb9a15a17
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: marker | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
@marker.setter
def marker(self, marker):
    """Sets the marker of this ListDevicesRequest.

    **Parameter description**: ID of the last record in the previous page of
    results, returned by the IoT platform in that query. The platform pages in
    descending order of marker, i.e. record ID, so newer records have larger
    IDs. If marker is supplied, only records with IDs smaller than marker are
    returned; if it is omitted, the query starts from the record with the
    largest, i.e. newest, ID. To walk through all data, each query must pass
    the marker value from the previous response.
    **Value range**: a 24-character hexadecimal string; default
    ffffffffffffffffffffffff.

    :param marker: The marker of this ListDevicesRequest.
    :type: str
    """
    self._marker = marker
```
`body_hash`: 94b8bdbb07dd299f483611d439ad74022cd478e0175d9fbb2ce3f45c96f1d8c5
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: offset | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
@property
def offset(self):
    """Gets the offset of this ListDevicesRequest.

    **Parameter description**: the query starts offset records after marker.
    Default 0; valid values are integers from 0 to 500. With offset 0, output
    starts from the first record after marker. The cap on offset exists for
    API performance; combine offset with marker to page through results. For
    example, with 50 records per page, pages 1 to 11 can be reached directly
    with offset, but beyond page 11, because offset is capped at 500, you must
    use the marker returned by page 11 as the marker of the next query to
    reach pages 12 to 22.
    **Value range**: an integer from 0 to 500; default 0.

    :return: The offset of this ListDevicesRequest.
    :rtype: int
    """
    return self._offset
```
`body_hash`: 36445016daa5600500bdff6b181a4ce037cbe29b28b449e0f326f8886fa621ff
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: offset | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
@offset.setter
def offset(self, offset):
    """Sets the offset of this ListDevicesRequest.

    **Parameter description**: the query starts offset records after marker.
    Default 0; valid values are integers from 0 to 500. With offset 0, output
    starts from the first record after marker. The cap on offset exists for
    API performance; combine offset with marker to page through results. For
    example, with 50 records per page, pages 1 to 11 can be reached directly
    with offset, but beyond page 11, because offset is capped at 500, you must
    use the marker returned by page 11 as the marker of the next query to
    reach pages 12 to 22.
    **Value range**: an integer from 0 to 500; default 0.

    :param offset: The offset of this ListDevicesRequest.
    :type: int
    """
    self._offset = offset
```
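The marker/offset paging scheme described in the docstrings above can be sketched with an in-memory simulation. Note this is not a real IoTDA client: `query_page`, `iterate_all`, and the record layout are hypothetical stand-ins that only reproduce the described semantics, i.e. records served in descending record-ID order, `marker` restricting results to smaller IDs, and each response's last ID feeding the next query.

```python
# Simulated marker/offset paging (no real IoTDA client involved).
MAX_MARKER = "f" * 24  # default marker: query from the newest record

def query_page(records, marker=MAX_MARKER, offset=0, limit=50):
    """Return one page of records plus the marker for the next query."""
    # Fixed-width lowercase hex IDs compare lexicographically == numerically.
    eligible = sorted((r for r in records if r["id"] < marker),
                      key=lambda r: r["id"], reverse=True)
    page = eligible[offset:offset + limit]
    next_marker = page[-1]["id"] if page else None
    return page, next_marker

def iterate_all(records, limit=50):
    """Walk every record by feeding each response's marker into the next query."""
    marker, out = MAX_MARKER, []
    while True:
        page, marker = query_page(records, marker=marker, limit=limit)
        if not page:
            return out
        out.extend(page)

records = [{"id": f"{i:024x}"} for i in range(120)]
all_rows = iterate_all(records, limit=50)
assert len(all_rows) == 120
assert all_rows[0]["id"] == f"{119:024x}"  # newest (largest ID) first
```

The design point the docstrings make is visible here: `offset` only skips within the window after `marker`, so deep pagination must advance `marker` rather than grow `offset`.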
`body_hash`: 1f497a68fe0d77f72bb8eb241822aa90e16372b7cf5f79e0e223e234aeaf2801
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: start_time | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
@property
def start_time(self):
    """Gets the start_time of this ListDevicesRequest.

    **Parameter description**: queries records whose device registration time
    is later than startTime, in the format yyyyMMdd'T'HHmmss'Z', for example
    20151212T121212Z.

    :return: The start_time of this ListDevicesRequest.
    :rtype: str
    """
    return self._start_time
```
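The timestamp format required above is a compact UTC form without separators. A small helper can produce it; the helper name is ours, not part of the SDK, and it assumes the input datetime is already in UTC.

```python
from datetime import datetime

def to_iotda_timestamp(dt: datetime) -> str:
    # yyyyMMdd'T'HHmmss'Z' as required by start_time / end_time
    return dt.strftime("%Y%m%dT%H%M%SZ")

# Matches the example value from the docstring
assert to_iotda_timestamp(datetime(2015, 12, 12, 12, 12, 12)) == "20151212T121212Z"
```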
`body_hash`: 4fadb4ed7dd78bb201887820e417f08ae8b3446158d712158eba3b594ea48725
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: start_time | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
@start_time.setter
def start_time(self, start_time):
    """Sets the start_time of this ListDevicesRequest.

    **Parameter description**: queries records whose device registration time
    is later than startTime, in the format yyyyMMdd'T'HHmmss'Z', for example
    20151212T121212Z.

    :param start_time: The start_time of this ListDevicesRequest.
    :type: str
    """
    self._start_time = start_time
```
`body_hash`: e4ced670e82d8d7d5855a445c01b396fc4b1e7d8c0e4d395dfdca8a0c04b2d58
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: end_time | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
@property
def end_time(self):
    """Gets the end_time of this ListDevicesRequest.

    **Parameter description**: queries records whose device registration time
    is earlier than endTime, in the format yyyyMMdd'T'HHmmss'Z', for example
    20151212T121212Z.

    :return: The end_time of this ListDevicesRequest.
    :rtype: str
    """
    return self._end_time
```
`body_hash`: 17a14b900c9874a3a69fc8d6b592e41b0416caefff8a8e1beb273c7850113a2d
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: end_time | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
@end_time.setter
def end_time(self, end_time):
    """Sets the end_time of this ListDevicesRequest.

    **Parameter description**: queries records whose device registration time
    is earlier than endTime, in the format yyyyMMdd'T'HHmmss'Z', for example
    20151212T121212Z.

    :param end_time: The end_time of this ListDevicesRequest.
    :type: str
    """
    self._end_time = end_time
```
`body_hash`: a00e9e6584805e203b23795f2cb9642790292e0884ab855b80f4d6ad2d46e48b
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: app_id | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
@property
def app_id(self):
    """Gets the app_id of this ListDevicesRequest.

    **Parameter description**: resource space ID. Optional; a user with
    multiple resource spaces can pass it to list only the devices in that
    resource space. If it is omitted, all devices under the user are listed.
    **Value range**: no more than 36 characters; only letters, digits,
    underscores (_), and hyphens (-) are allowed.

    :return: The app_id of this ListDevicesRequest.
    :rtype: str
    """
    return self._app_id
```
`body_hash`: 8c8488a6ef4f0d1b777412caeba3d62db61be64da196769a8007425b67641e1f
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: app_id | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
@app_id.setter
def app_id(self, app_id):
    """Sets the app_id of this ListDevicesRequest.

    **Parameter description**: resource space ID. Optional; a user with
    multiple resource spaces can pass it to list only the devices in that
    resource space. If it is omitted, all devices under the user are listed.
    **Value range**: no more than 36 characters; only letters, digits,
    underscores (_), and hyphens (-) are allowed.

    :param app_id: The app_id of this ListDevicesRequest.
    :type: str
    """
    self._app_id = app_id
```
`body_hash`: 23795442a46e2cd10dec98fded44ed9172a29971e98983a30ad89baa6c9c0a03
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: to_dict | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
def to_dict(self):
    """Returns the model properties as a dict"""
    result = {}
    for attr, _ in six.iteritems(self.openapi_types):
        value = getattr(self, attr)
        if isinstance(value, list):
            result[attr] = list(map(
                lambda x: x.to_dict() if hasattr(x, 'to_dict') else x,
                value))
        elif hasattr(value, 'to_dict'):
            result[attr] = value.to_dict()
        elif isinstance(value, dict):
            result[attr] = dict(map(
                lambda item: (item[0], item[1].to_dict())
                if hasattr(item[1], 'to_dict') else item,
                value.items()))
        elif attr in self.sensitive_list:
            result[attr] = '****'
        else:
            result[attr] = value
    return result
```
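The recursion `to_dict` performs, i.e. nested models serialized via their own `to_dict`, lists and dicts converted element-wise, and attributes in `sensitive_list` masked, can be exercised with minimal stand-in classes. `Child` and `Parent` below are hypothetical, not SDK models, and plain dict iteration replaces `six.iteritems`.

```python
class Child:
    """Stand-in nested model with its own to_dict (hypothetical)."""
    def __init__(self, value):
        self.value = value

    def to_dict(self):
        return {"value": self.value}


class Parent:
    """Stand-in model mirroring the SDK's to_dict recursion."""
    openapi_types = {"items": "list", "mapping": "dict",
                     "secret": "str", "plain": "str"}
    sensitive_list = ["secret"]

    def __init__(self):
        self.items = [Child(1), Child(2)]
        self.mapping = {"a": Child(3)}
        self.secret = "token"
        self.plain = "ok"

    def to_dict(self):
        result = {}
        for attr in self.openapi_types:  # plain iteration instead of six.iteritems
            value = getattr(self, attr)
            if isinstance(value, list):
                result[attr] = [x.to_dict() if hasattr(x, "to_dict") else x
                                for x in value]
            elif hasattr(value, "to_dict"):
                result[attr] = value.to_dict()
            elif isinstance(value, dict):
                result[attr] = {k: v.to_dict() if hasattr(v, "to_dict") else v
                                for k, v in value.items()}
            elif attr in self.sensitive_list:
                result[attr] = "****"
            else:
                result[attr] = value
        return result


d = Parent().to_dict()
assert d["items"] == [{"value": 1}, {"value": 2}]
assert d["mapping"] == {"a": {"value": 3}}
assert d["secret"] == "****"
assert d["plain"] == "ok"
```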
`body_hash`: a85eb2dd57daf3998acb705f217af08ef0b14fd68fee87605500331b1a5f2987
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: to_str | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
def to_str(self):
    """Returns the string representation of the model"""
    import simplejson as json
    if six.PY2:
        import sys
        reload(sys)
        sys.setdefaultencoding('utf-8')
    return json.dumps(sanitize_for_serialization(self), ensure_ascii=False)
```
`body_hash`: 122cefd5382ee9078015a8ccdeba1aa42a0625442bf0dcfc7748dc07a3e45d3f
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: __repr__ | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
def __repr__(self):
    """For `print`"""
    return self.to_str()
```
`body_hash`: 074ec605d5b9c159008f3cd97f7b645a9b7bf5c4c1b8a0fd67b702a8a6532609
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: __eq__ | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
def __eq__(self, other):
    """Returns true if both objects are equal"""
    if not isinstance(other, ListDevicesRequest):
        return False
    return self.__dict__ == other.__dict__
```
`body_hash`: 43dc6740163eb9fc1161d09cb2208a64c7ad0cc8d9c8637ac3264522d3ec7e42
`path`: huaweicloud-sdk-iotda/huaweicloudsdkiotda/v5/model/list_devices_request.py | `name`: __ne__ | `repository_name`: huaweicloud/huaweicloud-sdk-python-v3 | `repository_stars`: 64 | `lang`: python

```python
def __ne__(self, other):
    """Returns true if both objects are not equal"""
    return not (self == other)
```
`body_hash`: 2d3f3ab91a855841e76fa84a6fb027329e0b188a56fa51df7c079149c5d764d9
`path`: sotaai/rl/__init__.py | `name`: load_environment | `repository_name`: stateoftheartai/sotaai | `repository_stars`: 23 | `lang`: python

```python
def load_environment(name: str, import_library=True) -> abstractions.RlEnvironment:
    """Dummy load dataset function. Placeholder for real wrapper."""
    if not import_library:
        return abstractions.RlEnvironment(name)
    else:
        source = 'gym'
        wrapper = importlib.import_module('sotaai.rl.' + source + '_wrapper')
        raw_object = wrapper.load_environment(name)
        return abstractions.RlEnvironment(name, raw_object)
```
`body_hash`: da2443271e7454c421e52479d37ef49b22faea20db9f20f1a463f4836158ea23
`path`: sotaai/rl/__init__.py | `name`: load_model | `repository_name`: stateoftheartai/sotaai | `repository_stars`: 23 | `lang`: python

```python
def load_model(name: str, name_env: str = 'CartPole-v1',
               import_library=True) -> abstractions.RlModel:
    """Dummy load model function. Placeholder for real wrapper."""
    source = 'garage'
    if not import_library:
        return abstractions.RlModel(name=name)
    else:
        wrapper = importlib.import_module('sotaai.rl.' + source + '_wrapper')
        raw_object = wrapper.load_model(name, name_env=name_env)
        env = load_environment(name=name_env)
        return abstractions.RlModel(name=name, raw_algo=raw_object,
                                    source=source, environment=env)
```
`body_hash`: 0cbf6e0df0eb8a1d4d811048e84a249cf1d6bb4f910971796b7d6581584e2ac3
`path`: sotaai/rl/__init__.py | `name`: create_models_dict | `repository_name`: stateoftheartai/sotaai | `repository_stars`: 23 | `lang`: python

```python
def create_models_dict(model_names, models_sources_map,
                       import_library=False, log=False):
    """Given a list of model names, return a list with the JSON representation
    of each model as an standardized dict

    Args:
      model_names (list): list of model names to return the standardized dict
      models_sources_map: a dict map between model names and sources as
        returned by the utils function map_name_sources('models')

    Returns:
      A list of dictionaries with the JSON representation of each CV model
    """
    models = []
    for i, model_name in enumerate(model_names):
        if log:
            print(' - ({}/{}) {}'.format(i + 1, len(model_names),
                                         'models.' + model_name))
        model = load_model(name=model_name, import_library=import_library)
        model_dict = model.to_dict()
        model_dict['sources'] = models_sources_map[model_dict['name']]
        del model_dict['source']
        model_dict['implemented_sources'] = []
        if 'garage' in model_dict['sources']:
            model_dict['implemented_sources'] = ['garage']
        model_dict['unified_name'] = model_name
        models.append(model_dict)
    return models
```
ee6cac9975b9e7a970397355dc8aa29d6bf93ec1ed688bd01cb2f9c66d3b805c | def create_datasets_dict(dataset_names, dataset_sources_map, import_library=False, log=False):
"Given a list of dataset names, return a list with the JSON representation\n of each dataset as an standardized dict\n\n Args:\n dataset_names (list): list of dataset names to\n return the standardized dict\n dataset\n dataset_sources_map: a dict map between dataset names and sources as\n returned by the utils function map_name_sources('datasets')\n\n Returns:\n A list of dictionaries with the JSON representation of each CV model\n "
datasets = []
for (i, dataset_name) in enumerate(dataset_names):
if log:
print(' - ({}/{}) {}'.format((i + 1), len(dataset_names), ('datasets.' + dataset_name)))
dataset = load_environment(name=dataset_name, import_library=import_library)
dataset_dict = dataset.to_dict()
dataset_dict['sources'] = dataset_sources_map[dataset_dict['name']]
del dataset_dict['source']
dataset_dict['implemented_sources'] = dataset_dict['sources']
del dataset_dict['metadata']
del dataset_dict['observation_space']
del dataset_dict['action_space']
dataset_dict['unified_name'] = dataset_name
datasets.append(dataset_dict)
return datasets | Given a list of dataset names, return a list with the JSON representation
of each dataset as an standardized dict
Args:
dataset_names (list): list of dataset names to
return the standardized dict
dataset
dataset_sources_map: a dict map between dataset names and sources as
returned by the utils function map_name_sources('datasets')
Returns:
A list of dictionaries with the JSON representation of each CV model | sotaai/rl/__init__.py | create_datasets_dict | stateoftheartai/sotaai | 23 | python | def create_datasets_dict(dataset_names, dataset_sources_map, import_library=False, log=False):
"Given a list of dataset names, return a list with the JSON representation\n of each dataset as an standardized dict\n\n Args:\n dataset_names (list): list of dataset names to\n return the standardized dict\n dataset\n dataset_sources_map: a dict map between dataset names and sources as\n returned by the utils function map_name_sources('datasets')\n\n Returns:\n A list of dictionaries with the JSON representation of each CV model\n "
datasets = []
for (i, dataset_name) in enumerate(dataset_names):
if log:
print(' - ({}/{}) {}'.format((i + 1), len(dataset_names), ('datasets.' + dataset_name)))
dataset = load_environment(name=dataset_name, import_library=import_library)
dataset_dict = dataset.to_dict()
dataset_dict['sources'] = dataset_sources_map[dataset_dict['name']]
del dataset_dict['source']
dataset_dict['implemented_sources'] = dataset_dict['sources']
del dataset_dict['metadata']
del dataset_dict['observation_space']
del dataset_dict['action_space']
dataset_dict['unified_name'] = dataset_name
datasets.append(dataset_dict)
return datasets | def create_datasets_dict(dataset_names, dataset_sources_map, import_library=False, log=False):
"Given a list of dataset names, return a list with the JSON representation\n of each dataset as an standardized dict\n\n Args:\n dataset_names (list): list of dataset names to\n return the standardized dict\n dataset\n dataset_sources_map: a dict map between dataset names and sources as\n returned by the utils function map_name_sources('datasets')\n\n Returns:\n A list of dictionaries with the JSON representation of each CV model\n "
datasets = []
for (i, dataset_name) in enumerate(dataset_names):
if log:
print(' - ({}/{}) {}'.format((i + 1), len(dataset_names), ('datasets.' + dataset_name)))
dataset = load_environment(name=dataset_name, import_library=import_library)
dataset_dict = dataset.to_dict()
dataset_dict['sources'] = dataset_sources_map[dataset_dict['name']]
del dataset_dict['source']
dataset_dict['implemented_sources'] = dataset_dict['sources']
del dataset_dict['metadata']
del dataset_dict['observation_space']
del dataset_dict['action_space']
dataset_dict['unified_name'] = dataset_name
datasets.append(dataset_dict)
return datasets<|docstring|>Given a list of dataset names, return a list with the JSON representation
of each dataset as a standardized dict
Args:
dataset_names (list): list of dataset names to
return the standardized dict
dataset
dataset_sources_map: a dict map between dataset names and sources as
returned by the utils function map_name_sources('datasets')
Returns:
A list of dictionaries with the JSON representation of each CV model<|endoftext|> |
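The `create_datasets_dict` record above reshapes each environment's `to_dict()` output into a standardized entry: sources are looked up in a name-to-sources map, internal keys are dropped, and a `unified_name` is added. A minimal runnable sketch of that reshaping — `FakeEnv` and the sample source map are illustrative stand-ins, not part of the sotaai API:

```python
class FakeEnv:
    """Stand-in for a loaded RL environment (hypothetical, not part of sotaai)."""
    def to_dict(self):
        return {'name': 'CartPole-v1', 'source': 'gym', 'metadata': {},
                'observation_space': None, 'action_space': None}

def standardize(dataset_name, dataset_sources_map, env):
    # Mirror the key massaging done in the record above.
    d = env.to_dict()
    d['sources'] = dataset_sources_map[d['name']]
    del d['source']
    d['implemented_sources'] = d['sources']
    for key in ('metadata', 'observation_space', 'action_space'):
        del d[key]
    d['unified_name'] = dataset_name
    return d

entry = standardize('CartPole-v1', {'CartPole-v1': ['gym']}, FakeEnv())
```

The resulting dict carries only serializable metadata, which is why the space and metadata objects are deleted before the entry is collected.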
5cc52598fd1a32f8aba0d77b8888b43ac420eb06144c68971f3dabcb98fa83f8 | def __init__(self, reference_width=None, reference_height=None, lines=None):
'EngineTaskLineSetting - a model defined in Swagger'
self._reference_width = None
self._reference_height = None
self._lines = None
self.discriminator = None
if (reference_width is not None):
self.reference_width = reference_width
if (reference_height is not None):
self.reference_height = reference_height
if (lines is not None):
self.lines = lines | EngineTaskLineSetting - a model defined in Swagger | vtpl_api/models/engine_task_line_setting.py | __init__ | vtpl1/videonetics_api | 0 | python | def __init__(self, reference_width=None, reference_height=None, lines=None):
self._reference_width = None
self._reference_height = None
self._lines = None
self.discriminator = None
if (reference_width is not None):
self.reference_width = reference_width
if (reference_height is not None):
self.reference_height = reference_height
if (lines is not None):
self.lines = lines | def __init__(self, reference_width=None, reference_height=None, lines=None):
self._reference_width = None
self._reference_height = None
self._lines = None
self.discriminator = None
if (reference_width is not None):
self.reference_width = reference_width
if (reference_height is not None):
self.reference_height = reference_height
if (lines is not None):
self.lines = lines<|docstring|>EngineTaskLineSetting - a model defined in Swagger<|endoftext|> |
2112ae1aaf113b9439beecad18f80f9f9a53fe86729ffd09bb26cf77573878e6 | @property
def reference_width(self):
'Gets the reference_width of this EngineTaskLineSetting. # noqa: E501\n\n Reference width on which zones are relevant # noqa: E501\n\n :return: The reference_width of this EngineTaskLineSetting. # noqa: E501\n :rtype: int\n '
return self._reference_width | Gets the reference_width of this EngineTaskLineSetting. # noqa: E501
Reference width on which zones are relevant # noqa: E501
:return: The reference_width of this EngineTaskLineSetting. # noqa: E501
:rtype: int | vtpl_api/models/engine_task_line_setting.py | reference_width | vtpl1/videonetics_api | 0 | python | @property
def reference_width(self):
'Gets the reference_width of this EngineTaskLineSetting. # noqa: E501\n\n Reference width on which zones are relevant # noqa: E501\n\n :return: The reference_width of this EngineTaskLineSetting. # noqa: E501\n :rtype: int\n '
return self._reference_width | @property
def reference_width(self):
'Gets the reference_width of this EngineTaskLineSetting. # noqa: E501\n\n Reference width on which zones are relevant # noqa: E501\n\n :return: The reference_width of this EngineTaskLineSetting. # noqa: E501\n :rtype: int\n '
return self._reference_width<|docstring|>Gets the reference_width of this EngineTaskLineSetting. # noqa: E501
Reference width on which zones are relevant # noqa: E501
:return: The reference_width of this EngineTaskLineSetting. # noqa: E501
:rtype: int<|endoftext|> |
743e1229de97027e22a06c9d55bf769c9a4e2276c43c0b2f68e8454176939cf6 | @reference_width.setter
def reference_width(self, reference_width):
'Sets the reference_width of this EngineTaskLineSetting.\n\n Reference width on which zones are relevant # noqa: E501\n\n :param reference_width: The reference_width of this EngineTaskLineSetting. # noqa: E501\n :type: int\n '
self._reference_width = reference_width | Sets the reference_width of this EngineTaskLineSetting.
Reference width on which zones are relevant # noqa: E501
:param reference_width: The reference_width of this EngineTaskLineSetting. # noqa: E501
:type: int | vtpl_api/models/engine_task_line_setting.py | reference_width | vtpl1/videonetics_api | 0 | python | @reference_width.setter
def reference_width(self, reference_width):
'Sets the reference_width of this EngineTaskLineSetting.\n\n Reference width on which zones are relevant # noqa: E501\n\n :param reference_width: The reference_width of this EngineTaskLineSetting. # noqa: E501\n :type: int\n '
self._reference_width = reference_width | @reference_width.setter
def reference_width(self, reference_width):
'Sets the reference_width of this EngineTaskLineSetting.\n\n Reference width on which zones are relevant # noqa: E501\n\n :param reference_width: The reference_width of this EngineTaskLineSetting. # noqa: E501\n :type: int\n '
self._reference_width = reference_width<|docstring|>Sets the reference_width of this EngineTaskLineSetting.
Reference width on which zones are relevant # noqa: E501
:param reference_width: The reference_width of this EngineTaskLineSetting. # noqa: E501
:type: int<|endoftext|> |
834cb07c37ee45083fad6bbcc21455dfedcd785f760be4df0a2887831f3f3962 | @property
def reference_height(self):
'Gets the reference_height of this EngineTaskLineSetting. # noqa: E501\n\n Reference height on which zones are relevant # noqa: E501\n\n :return: The reference_height of this EngineTaskLineSetting. # noqa: E501\n :rtype: int\n '
return self._reference_height | Gets the reference_height of this EngineTaskLineSetting. # noqa: E501
Reference height on which zones are relevant # noqa: E501
:return: The reference_height of this EngineTaskLineSetting. # noqa: E501
:rtype: int | vtpl_api/models/engine_task_line_setting.py | reference_height | vtpl1/videonetics_api | 0 | python | @property
def reference_height(self):
'Gets the reference_height of this EngineTaskLineSetting. # noqa: E501\n\n Reference height on which zones are relevant # noqa: E501\n\n :return: The reference_height of this EngineTaskLineSetting. # noqa: E501\n :rtype: int\n '
return self._reference_height | @property
def reference_height(self):
'Gets the reference_height of this EngineTaskLineSetting. # noqa: E501\n\n Reference height on which zones are relevant # noqa: E501\n\n :return: The reference_height of this EngineTaskLineSetting. # noqa: E501\n :rtype: int\n '
return self._reference_height<|docstring|>Gets the reference_height of this EngineTaskLineSetting. # noqa: E501
Reference height on which zones are relevant # noqa: E501
:return: The reference_height of this EngineTaskLineSetting. # noqa: E501
:rtype: int<|endoftext|> |
8ae7873a61106e40159041c89c7fd5fcc99386e4dd9db0bbfc90c6379ac2f5da | @reference_height.setter
def reference_height(self, reference_height):
'Sets the reference_height of this EngineTaskLineSetting.\n\n Reference height on which zones are relevant # noqa: E501\n\n :param reference_height: The reference_height of this EngineTaskLineSetting. # noqa: E501\n :type: int\n '
self._reference_height = reference_height | Sets the reference_height of this EngineTaskLineSetting.
Reference height on which zones are relevant # noqa: E501
:param reference_height: The reference_height of this EngineTaskLineSetting. # noqa: E501
:type: int | vtpl_api/models/engine_task_line_setting.py | reference_height | vtpl1/videonetics_api | 0 | python | @reference_height.setter
def reference_height(self, reference_height):
'Sets the reference_height of this EngineTaskLineSetting.\n\n Reference height on which zones are relevant # noqa: E501\n\n :param reference_height: The reference_height of this EngineTaskLineSetting. # noqa: E501\n :type: int\n '
self._reference_height = reference_height | @reference_height.setter
def reference_height(self, reference_height):
'Sets the reference_height of this EngineTaskLineSetting.\n\n Reference height on which zones are relevant # noqa: E501\n\n :param reference_height: The reference_height of this EngineTaskLineSetting. # noqa: E501\n :type: int\n '
self._reference_height = reference_height<|docstring|>Sets the reference_height of this EngineTaskLineSetting.
Reference height on which zones are relevant # noqa: E501
:param reference_height: The reference_height of this EngineTaskLineSetting. # noqa: E501
:type: int<|endoftext|> |
5cb448cb0eb3b0415668593a7d03b76e445874bc728c9b52eff45a5f69f43e65 | @property
def lines(self):
'Gets the lines of this EngineTaskLineSetting. # noqa: E501\n\n\n :return: The lines of this EngineTaskLineSetting. # noqa: E501\n :rtype: list[Line]\n '
return self._lines | Gets the lines of this EngineTaskLineSetting. # noqa: E501
:return: The lines of this EngineTaskLineSetting. # noqa: E501
:rtype: list[Line] | vtpl_api/models/engine_task_line_setting.py | lines | vtpl1/videonetics_api | 0 | python | @property
def lines(self):
'Gets the lines of this EngineTaskLineSetting. # noqa: E501\n\n\n :return: The lines of this EngineTaskLineSetting. # noqa: E501\n :rtype: list[Line]\n '
return self._lines | @property
def lines(self):
'Gets the lines of this EngineTaskLineSetting. # noqa: E501\n\n\n :return: The lines of this EngineTaskLineSetting. # noqa: E501\n :rtype: list[Line]\n '
return self._lines<|docstring|>Gets the lines of this EngineTaskLineSetting. # noqa: E501
:return: The lines of this EngineTaskLineSetting. # noqa: E501
:rtype: list[Line]<|endoftext|> |
3c29f720480d9477d03463566071454848f59a6cdac10d728367599ac4853c54 | @lines.setter
def lines(self, lines):
'Sets the lines of this EngineTaskLineSetting.\n\n\n :param lines: The lines of this EngineTaskLineSetting. # noqa: E501\n :type: list[Line]\n '
self._lines = lines | Sets the lines of this EngineTaskLineSetting.
:param lines: The lines of this EngineTaskLineSetting. # noqa: E501
:type: list[Line] | vtpl_api/models/engine_task_line_setting.py | lines | vtpl1/videonetics_api | 0 | python | @lines.setter
def lines(self, lines):
'Sets the lines of this EngineTaskLineSetting.\n\n\n :param lines: The lines of this EngineTaskLineSetting. # noqa: E501\n :type: list[Line]\n '
self._lines = lines | @lines.setter
def lines(self, lines):
'Sets the lines of this EngineTaskLineSetting.\n\n\n :param lines: The lines of this EngineTaskLineSetting. # noqa: E501\n :type: list[Line]\n '
self._lines = lines<|docstring|>Sets the lines of this EngineTaskLineSetting.
:param lines: The lines of this EngineTaskLineSetting. # noqa: E501
:type: list[Line]<|endoftext|> |
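The getter/setter records above all follow the same generated pattern: a read-only `@property` over a private `_`-prefixed attribute plus a setter that assigns it, with the constructor routing through the setter. A minimal sketch of the pattern (the `LineSetting` class name here is illustrative, not the full generated model):

```python
class LineSetting:
    def __init__(self, reference_width=None):
        self._reference_width = None
        if reference_width is not None:
            # Assign via the property so any setter logic runs.
            self.reference_width = reference_width

    @property
    def reference_width(self):
        return self._reference_width

    @reference_width.setter
    def reference_width(self, value):
        self._reference_width = value

s = LineSetting(reference_width=1920)
```

Routing the constructor through the property means a generated model can later add validation in one place — the setter — without touching `__init__`.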
a10332a31d632fe259f9430d0a83b869ebebc22a7880d2d0ba44abcec2edc1be | def to_dict(self):
'Returns the model properties as a dict'
result = {}
for (attr, _) in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map((lambda x: (x.to_dict() if hasattr(x, 'to_dict') else x)), value))
elif hasattr(value, 'to_dict'):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map((lambda item: ((item[0], item[1].to_dict()) if hasattr(item[1], 'to_dict') else item)), value.items()))
else:
result[attr] = value
if issubclass(EngineTaskLineSetting, dict):
for (key, value) in self.items():
result[key] = value
return result | Returns the model properties as a dict | vtpl_api/models/engine_task_line_setting.py | to_dict | vtpl1/videonetics_api | 0 | python | def to_dict(self):
result = {}
for (attr, _) in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map((lambda x: (x.to_dict() if hasattr(x, 'to_dict') else x)), value))
elif hasattr(value, 'to_dict'):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map((lambda item: ((item[0], item[1].to_dict()) if hasattr(item[1], 'to_dict') else item)), value.items()))
else:
result[attr] = value
if issubclass(EngineTaskLineSetting, dict):
for (key, value) in self.items():
result[key] = value
return result | def to_dict(self):
result = {}
for (attr, _) in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map((lambda x: (x.to_dict() if hasattr(x, 'to_dict') else x)), value))
elif hasattr(value, 'to_dict'):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map((lambda item: ((item[0], item[1].to_dict()) if hasattr(item[1], 'to_dict') else item)), value.items()))
else:
result[attr] = value
if issubclass(EngineTaskLineSetting, dict):
for (key, value) in self.items():
result[key] = value
return result<|docstring|>Returns the model properties as a dict<|endoftext|> |
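The generated `to_dict` above walks the attributes named in `swagger_types`, recursing into nested models and lists of models. A simplified, dependency-free sketch of that traversal — it drops the `six` and dict-subclass branches, and the `Point`/`Setting` classes are illustrative stand-ins:

```python
def serialize(model):
    # Walk the declared attributes, recursing where a value is itself a model.
    result = {}
    for attr in model.swagger_types:
        value = getattr(model, attr)
        if isinstance(value, list):
            result[attr] = [v.to_dict() if hasattr(v, 'to_dict') else v for v in value]
        elif hasattr(value, 'to_dict'):
            result[attr] = value.to_dict()
        else:
            result[attr] = value
    return result

class Point:
    swagger_types = {'x': 'int', 'y': 'int'}
    def __init__(self, x, y):
        self.x, self.y = x, y
    def to_dict(self):
        return serialize(self)

class Setting:
    swagger_types = {'reference_width': 'int', 'lines': 'list[Point]'}
    def __init__(self, reference_width, lines):
        self.reference_width = reference_width
        self.lines = lines
    def to_dict(self):
        return serialize(self)

d = Setting(1920, [Point(0, 0), Point(10, 5)]).to_dict()
```

Nested models serialize themselves, so the result is plain dicts and lists all the way down — exactly what `pprint.pformat` consumes in the `to_str` record.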
cbb19eaa2fc8a113d9e32f924ef280a7e97563f8915f94f65dab438997af2e99 | def to_str(self):
'Returns the string representation of the model'
return pprint.pformat(self.to_dict()) | Returns the string representation of the model | vtpl_api/models/engine_task_line_setting.py | to_str | vtpl1/videonetics_api | 0 | python | def to_str(self):
return pprint.pformat(self.to_dict()) | def to_str(self):
return pprint.pformat(self.to_dict())<|docstring|>Returns the string representation of the model<|endoftext|> |
772243a2c2b3261a9b954d07aaf295e3c1242a579a495e2d6a5679c677861703 | def __repr__(self):
'For `print` and `pprint`'
return self.to_str() | For `print` and `pprint` | vtpl_api/models/engine_task_line_setting.py | __repr__ | vtpl1/videonetics_api | 0 | python | def __repr__(self):
return self.to_str() | def __repr__(self):
return self.to_str()<|docstring|>For `print` and `pprint`<|endoftext|> |
50ff622fa3465889f67e6fae7029dd4b90f8ebc0a47ae31e31609fd99d9d35c4 | def __eq__(self, other):
'Returns true if both objects are equal'
if (not isinstance(other, EngineTaskLineSetting)):
return False
return (self.__dict__ == other.__dict__) | Returns true if both objects are equal | vtpl_api/models/engine_task_line_setting.py | __eq__ | vtpl1/videonetics_api | 0 | python | def __eq__(self, other):
if (not isinstance(other, EngineTaskLineSetting)):
return False
return (self.__dict__ == other.__dict__) | def __eq__(self, other):
if (not isinstance(other, EngineTaskLineSetting)):
return False
return (self.__dict__ == other.__dict__)<|docstring|>Returns true if both objects are equal<|endoftext|> |
43dc6740163eb9fc1161d09cb2208a64c7ad0cc8d9c8637ac3264522d3ec7e42 | def __ne__(self, other):
'Returns true if both objects are not equal'
return (not (self == other)) | Returns true if both objects are not equal | vtpl_api/models/engine_task_line_setting.py | __ne__ | vtpl1/videonetics_api | 0 | python | def __ne__(self, other):
return (not (self == other)) | def __ne__(self, other):
return (not (self == other))<|docstring|>Returns true if both objects are not equal<|endoftext|> |
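Equality in the records above is a type check plus a plain `__dict__` comparison, with `__ne__` defined as its negation. A self-contained sketch of the same scheme:

```python
class Model:
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __eq__(self, other):
        # Same type and same attribute dict means equal.
        if not isinstance(other, Model):
            return False
        return self.__dict__ == other.__dict__

    def __ne__(self, other):
        return not (self == other)

x, y, z = Model(1, 2), Model(1, 2), Model(1, 3)
```

Comparing `__dict__` makes two separately constructed instances with the same field values compare equal, which is what generated API models want.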
7ef6c62b97bba1c32aaa09a95500e8b8dcecacc7a85b4290481d861983d3b5f1 | def info(self, message, *args, **kwargs):
'Displays an info message to the default output stream\n\n :param str message: text to be displayed'
self._log.info(message, *args, **kwargs) | Displays an info message to the default output stream
:param str message: text to be displayed | src/friendlyshell/basic_logger_mixin.py | info | TheFriendlyCoder/FriendlyShell | 0 | python | def info(self, message, *args, **kwargs):
'Displays an info message to the default output stream\n\n :param str message: text to be displayed'
self._log.info(message, *args, **kwargs) | def info(self, message, *args, **kwargs):
'Displays an info message to the default output stream\n\n :param str message: text to be displayed'
self._log.info(message, *args, **kwargs)<|docstring|>Displays an info message to the default output stream
:param str message: text to be displayed<|endoftext|> |
0e41d6e2a1d037957183ca24022588405a4f8519f67e415e4f434b6b360c5b0c | def warning(self, message, *args, **kwargs):
'Displays a non-critical warning message to the default output stream\n\n :param str message: text to be displayed'
self._log.warning(message, *args, **kwargs) | Displays a non-critical warning message to the default output stream
:param str message: text to be displayed | src/friendlyshell/basic_logger_mixin.py | warning | TheFriendlyCoder/FriendlyShell | 0 | python | def warning(self, message, *args, **kwargs):
'Displays a non-critical warning message to the default output stream\n\n :param str message: text to be displayed'
self._log.warning(message, *args, **kwargs) | def warning(self, message, *args, **kwargs):
'Displays a non-critical warning message to the default output stream\n\n :param str message: text to be displayed'
self._log.warning(message, *args, **kwargs)<|docstring|>Displays a non-critical warning message to the default output stream
:param str message: text to be displayed<|endoftext|> |
c4c88d2997498b944c71e58563b06690753578db7820edce5052d8e82fb63f93 | def error(self, message, *args, **kwargs):
'Displays a critical error message to the default output stream\n\n :param str message: text to be displayed'
self._log.error(message, *args, **kwargs) | Displays a critical error message to the default output stream
:param str message: text to be displayed | src/friendlyshell/basic_logger_mixin.py | error | TheFriendlyCoder/FriendlyShell | 0 | python | def error(self, message, *args, **kwargs):
'Displays a critical error message to the default output stream\n\n :param str message: text to be displayed'
self._log.error(message, *args, **kwargs) | def error(self, message, *args, **kwargs):
'Displays a critical error message to the default output stream\n\n :param str message: text to be displayed'
self._log.error(message, *args, **kwargs)<|docstring|>Displays a critical error message to the default output stream
:param str message: text to be displayed<|endoftext|> |
6c311da386f50c27fbd4d7089091617d6b3aee36af98b1a677089f476158ef31 | def debug(self, message, *args, **kwargs):
'Displays an internal-use-only debug message to verbose log file\n\n :param str message: text to be displayed'
self._log.debug(message, *args, **kwargs) | Displays an internal-use-only debug message to verbose log file
:param str message: text to be displayed | src/friendlyshell/basic_logger_mixin.py | debug | TheFriendlyCoder/FriendlyShell | 0 | python | def debug(self, message, *args, **kwargs):
'Displays an internal-use-only debug message to verbose log file\n\n :param str message: text to be displayed'
self._log.debug(message, *args, **kwargs) | def debug(self, message, *args, **kwargs):
'Displays an internal-use-only debug message to verbose log file\n\n :param str message: text to be displayed'
self._log.debug(message, *args, **kwargs)<|docstring|>Displays an internal-use-only debug message to verbose log file
:param str message: text to be displayed<|endoftext|> |
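The four logger records above simply delegate to a wrapped `logging.Logger`. A sketch of the same delegation, using a capture handler so the emitted records can be inspected (the class and logger names are illustrative, not FriendlyShell's actual ones):

```python
import logging

class LoggerMixin:
    """Minimal stand-in for the mixin: each method forwards to self._log."""
    def __init__(self):
        self._log = logging.getLogger('friendly_shell_demo')

    def info(self, message, *args, **kwargs):
        self._log.info(message, *args, **kwargs)

    def warning(self, message, *args, **kwargs):
        self._log.warning(message, *args, **kwargs)

# Capture LogRecords in a list instead of printing them.
records = []
handler = logging.Handler()
handler.emit = records.append
logger = logging.getLogger('friendly_shell_demo')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

shell = LoggerMixin()
shell.info('loaded %d plugins', 3)
shell.warning('deprecated option')
```

Forwarding `*args`/`**kwargs` preserves lazy `%`-style formatting: the message is only interpolated when a handler actually consumes the record.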
35b36edd677dfd7f5ecb5cfe4d512f92d8a9f2a23bf51ff98c8db276f765764a | def read_filesys(etrans_save_fs, etrans_locs):
' get the lj params thar are saved currently in the filesystem\n '
(_, _) = (etrans_save_fs, etrans_locs)
(sigmas, epsilons, geoms) = ([], [], [])
return (sigmas, epsilons, geoms) | get the lj params thar are saved currently in the filesystem | mechroutines/trans/_routines/_gather.py | read_filesys | sjklipp/mechdriver | 0 | python | def read_filesys(etrans_save_fs, etrans_locs):
' \n '
(_, _) = (etrans_save_fs, etrans_locs)
(sigmas, epsilons, geoms) = ([], [], [])
return (sigmas, epsilons, geoms) | def read_filesys(etrans_save_fs, etrans_locs):
' \n '
(_, _) = (etrans_save_fs, etrans_locs)
(sigmas, epsilons, geoms) = ([], [], [])
return (sigmas, epsilons, geoms)<|docstring|>get the lj params thar are saved currently in the filesystem<|endoftext|> |
aba6779f17f4e24b9eb3daf7d69e022821cb4bf17fe3e15836680411b50bd9de | def read_output(run_path):
' get the lj params from each run and average them together\n '
(all_sigmas, all_epsilons) = ([], [])
for jobdir in _jobdirs(run_path):
lj_str = _output_str(jobdir, 'lj.out')
geo_str = _output_str(jobdir, 'min_geoms.out')
(sigmas, epsilons) = onedmin_io.reader.lennard_jones(lj_str)
if ((sigmas is not None) and (epsilons is not None)):
all_sigmas.extend(sigmas)
all_epsilons.extend(epsilons)
geoms = geo_str.split()
return (sigmas, epsilons, geoms) | get the lj params from each run and average them together | mechroutines/trans/_routines/_gather.py | read_output | sjklipp/mechdriver | 0 | python | def read_output(run_path):
' \n '
(all_sigmas, all_epsilons) = ([], [])
for jobdir in _jobdirs(run_path):
lj_str = _output_str(jobdir, 'lj.out')
geo_str = _output_str(jobdir, 'min_geoms.out')
(sigmas, epsilons) = onedmin_io.reader.lennard_jones(lj_str)
if ((sigmas is not None) and (epsilons is not None)):
all_sigmas.extend(sigmas)
all_epsilons.extend(epsilons)
geoms = geo_str.split()
return (sigmas, epsilons, geoms) | def read_output(run_path):
' \n '
(all_sigmas, all_epsilons) = ([], [])
for jobdir in _jobdirs(run_path):
lj_str = _output_str(jobdir, 'lj.out')
geo_str = _output_str(jobdir, 'min_geoms.out')
(sigmas, epsilons) = onedmin_io.reader.lennard_jones(lj_str)
if ((sigmas is not None) and (epsilons is not None)):
all_sigmas.extend(sigmas)
all_epsilons.extend(epsilons)
geoms = geo_str.split()
return (sigmas, epsilons, geoms)<|docstring|>get the lj params from each run and average them together<|endoftext|> |
ee123c3e6b744fa308cf311cc2f00833b2f9c0daf9ab86d3005b56e5bf80eab4 | def prog_version(run_path):
' read the program and version\n '
for jobdir in _jobdirs(run_path):
lj_str = _output_str(jobdir, 'lj.out')
break
version = onedmin_io.reader.program_version(lj_str)
return version | read the program and version | mechroutines/trans/_routines/_gather.py | prog_version | sjklipp/mechdriver | 0 | python | def prog_version(run_path):
' \n '
for jobdir in _jobdirs(run_path):
lj_str = _output_str(jobdir, 'lj.out')
break
version = onedmin_io.reader.program_version(lj_str)
return version | def prog_version(run_path):
' \n '
for jobdir in _jobdirs(run_path):
lj_str = _output_str(jobdir, 'lj.out')
break
version = onedmin_io.reader.program_version(lj_str)
return version<|docstring|>read the program and version<|endoftext|> |
2c8467f9ae52a93256d35036ac777d7b433fb6c7b4eed415b4b5d7706080a2df | def _jobdirs(run_path):
' Obtain a list of all the directory names where OneDMin\n jobs were run\n '
return [os.path.join(run_path, directory) for directory in os.listdir(run_path) if (('build' not in directory) and ('yaml' not in directory))] | Obtain a list of all the directory names where OneDMin
jobs were run | mechroutines/trans/_routines/_gather.py | _jobdirs | sjklipp/mechdriver | 0 | python | def _jobdirs(run_path):
' Obtain a list of all the directory names where OneDMin\n jobs were run\n '
return [os.path.join(run_path, directory) for directory in os.listdir(run_path) if (('build' not in directory) and ('yaml' not in directory))] | def _jobdirs(run_path):
' Obtain a list of all the directory names where OneDMin\n jobs were run\n '
return [os.path.join(run_path, directory) for directory in os.listdir(run_path) if (('build' not in directory) and ('yaml' not in directory))]<|docstring|>Obtain a list of all the directory names where OneDMin
jobs were run<|endoftext|> |
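`_jobdirs` keeps every entry of the run directory whose name contains neither "build" nor "yaml". The same filter, exercised against a temporary directory:

```python
import os
import tempfile

def jobdirs(run_path):
    # Keep OneDMin job directories; skip build and config entries.
    return [os.path.join(run_path, d) for d in os.listdir(run_path)
            if 'build' not in d and 'yaml' not in d]

with tempfile.TemporaryDirectory() as run_path:
    for name in ('run_1', 'run_2', 'build', 'input.yaml'):
        os.mkdir(os.path.join(run_path, name))
    kept = sorted(os.path.basename(p) for p in jobdirs(run_path))
```

Note the match is substring-based, so a directory named `rebuild` would also be skipped — a deliberate looseness in the original.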
fc12345abc2a31899d909a8acba2d5300d76e9c4de5ebd71a87ff0d2d5403a68 | def _output_str(jobdir, output_name):
' Read the output file\n '
output_file_name = os.path.join(jobdir, output_name)
if os.path.exists(output_file_name):
with open(output_file_name, 'r') as outfile:
out_str = outfile.read()
else:
out_str = None
return out_str | Read the output file | mechroutines/trans/_routines/_gather.py | _output_str | sjklipp/mechdriver | 0 | python | def _output_str(jobdir, output_name):
' \n '
output_file_name = os.path.join(jobdir, output_name)
if os.path.exists(output_file_name):
with open(output_file_name, 'r') as outfile:
out_str = outfile.read()
else:
out_str = None
return out_str | def _output_str(jobdir, output_name):
' \n '
output_file_name = os.path.join(jobdir, output_name)
if os.path.exists(output_file_name):
with open(output_file_name, 'r') as outfile:
out_str = outfile.read()
else:
out_str = None
return out_str<|docstring|>Read the output file<|endoftext|> |
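`_output_str` returns the file's full text, or `None` when the expected output file is absent. A standalone sketch of the same behavior:

```python
import os
import tempfile

def output_str(jobdir, output_name):
    # Read the job's output file if it exists; signal absence with None.
    path = os.path.join(jobdir, output_name)
    if not os.path.exists(path):
        return None
    with open(path, 'r') as outfile:
        return outfile.read()

with tempfile.TemporaryDirectory() as jobdir:
    with open(os.path.join(jobdir, 'lj.out'), 'w') as f:
        f.write('sigma 3.47\n')
    present = output_str(jobdir, 'lj.out')
    missing = output_str(jobdir, 'min_geoms.out')
```

Returning `None` rather than raising lets callers like `read_output` skip jobs that crashed before writing their output.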
73a9cbaeaabed71119524abf3217fb28e5f6e5c9a6a9bee6a9fe2dc44a04c093 | def print_lj_parms(sigmas, epsilons):
' Print the lj parameters out\n '
if (sigmas and epsilons):
ioprinter.info_message('{0:<14s}{1:<16s}'.format('\nSigma (Ang)', 'Epsilon (cm-1)'))
for (sig, eps) in zip(sigmas, epsilons):
ioprinter.info_message('{0:<14.4f}{1:<16.4f}'.format(sig, eps)) | Print the lj parameters out | mechroutines/trans/_routines/_gather.py | print_lj_parms | sjklipp/mechdriver | 0 | python | def print_lj_parms(sigmas, epsilons):
' \n '
if (sigmas and epsilons):
ioprinter.info_message('{0:<14s}{1:<16s}'.format('\nSigma (Ang)', 'Epsilon (cm-1)'))
for (sig, eps) in zip(sigmas, epsilons):
ioprinter.info_message('{0:<14.4f}{1:<16.4f}'.format(sig, eps)) | def print_lj_parms(sigmas, epsilons):
' \n '
if (sigmas and epsilons):
ioprinter.info_message('{0:<14s}{1:<16s}'.format('\nSigma (Ang)', 'Epsilon (cm-1)'))
for (sig, eps) in zip(sigmas, epsilons):
ioprinter.info_message('{0:<14.4f}{1:<16.4f}'.format(sig, eps))<|docstring|>Print the lj parameters out<|endoftext|> |
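`print_lj_parms` lays its table out with fixed-width format specs: a left-aligned 14-character sigma column and a 16-character epsilon column. The row formatting in isolation (the sample values are made up):

```python
sigmas = [3.4715, 3.6023]
epsilons = [113.52, 97.3]

# Same format specs as the record: <14 and <16 left-aligned columns.
header = '{0:<14s}{1:<16s}'.format('Sigma (Ang)', 'Epsilon (cm-1)')
rows = ['{0:<14.4f}{1:<16.4f}'.format(sig, eps)
        for sig, eps in zip(sigmas, epsilons)]
```

Every line comes out exactly 30 characters wide, so the two columns stay aligned no matter how many digits the values carry.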
feae4ad7afb714c55c89b0ccacd90c2bd5682602982c23f87e3daec93c3e6620 | def copy_and_update(dictionary, update):
'Returns an updated copy of the dictionary without modifying the original'
newdict = dictionary.copy()
newdict.update(update)
return newdict | Returns an updated copy of the dictionary without modifying the original | coyote_framework/webdriver/webdriver/browsercapabilities.py | copy_and_update | vaibhavrastogi1988/python_testing_framework | 0 | python | def copy_and_update(dictionary, update):
newdict = dictionary.copy()
newdict.update(update)
return newdict | def copy_and_update(dictionary, update):
newdict = dictionary.copy()
newdict.update(update)
return newdict<|docstring|>Returns an updated copy of the dictionary without modifying the original<|endoftext|> |
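`copy_and_update` is a non-mutating dict merge: it copies first, then applies the update. A sketch showing that the original dictionary is left untouched (the capability values are made up):

```python
def copy_and_update(dictionary, update):
    # Copy first so the caller's dict is never mutated.
    newdict = dictionary.copy()
    newdict.update(update)
    return newdict

base = {'browser': 'chrome', 'headless': False}
caps = copy_and_update(base, {'headless': True})
```

This is the safe way to derive per-test capability sets from a shared default without the tests interfering with each other.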
35ac84ec71300842b032a390cb390c3a1d4d75a9da89a33bdf40f34c3fcf89f9 | def add(self, settings):
'Alias for update to allow chaining'
self.update(settings)
return self | Alias for update to allow chaining | coyote_framework/webdriver/webdriver/browsercapabilities.py | add | vaibhavrastogi1988/python_testing_framework | 0 | python | def add(self, settings):
self.update(settings)
return self | def add(self, settings):
self.update(settings)
return self<|docstring|>Alias for update to allow chaining<|endoftext|> |
5d247da6b002b8e0c2965eed679de28fdf2feb49b59cd004d2a9e4c09752b716 | def do_ndt_test(country_code=''):
'Runs the NDT test as a subprocess and returns the raw results.\n\n Args:\n `country_code`: A capitalized, two-letter country code representing the\n location of the desired test server. If no country code is supplied,\n the script uses the default mlab_ns behavior.\n Returns:\n The STDOUT of the call to `measurement_kit`.\n '
now = int(subprocess.check_output(['date', '-u', '+%s']))
if (country_code == ''):
result_raw = subprocess.check_output(['/test-runner/measurement_kit', '-g', ('--reportfile=/data/ndt-%d.njson' % now), 'ndt'])
else:
result_raw = subprocess.check_output(['/test-runner/measurement_kit', '-g', ('--reportfile=/data/ndt-%d.njson' % now), 'ndt', '-C', country_code])
return result_raw | Runs the NDT test as a subprocess and returns the raw results.
Args:
`country_code`: A capitalized, two-letter country code representing the
location of the desired test server. If no country code is supplied,
the script uses the default mlab_ns behavior.
Returns:
The STDOUT of the call to `measurement_kit`. | run.py | do_ndt_test | stephen-soltesz/murakami | 0 | python | def do_ndt_test(country_code=''):
'Runs the NDT test as a subprocess and returns the raw results.\n\n Args:\n `country_code`: A capitalized, two-letter country code representing the\n location of the desired test server. If no country code is supplied,\n the script uses the default mlab_ns behavior.\n Returns:\n The STDOUT of the call to `measurement_kit`.\n '
now = int(subprocess.check_output(['date', '-u', '+%s']))
if (country_code == ''):
result_raw = subprocess.check_output(['/test-runner/measurement_kit', '-g', ('--reportfile=/data/ndt-%d.njson' % now), 'ndt'])
else:
result_raw = subprocess.check_output(['/test-runner/measurement_kit', '-g', ('--reportfile=/data/ndt-%d.njson' % now), 'ndt', '-C', country_code])
return result_raw | def do_ndt_test(country_code=''):
'Runs the NDT test as a subprocess and returns the raw results.\n\n Args:\n `country_code`: A capitalized, two-letter country code representing the\n location of the desired test server. If no country code is supplied,\n the script uses the default mlab_ns behavior.\n Returns:\n The STDOUT of the call to `measurement_kit`.\n '
now = int(subprocess.check_output(['date', '-u', '+%s']))
if (country_code == ''):
result_raw = subprocess.check_output(['/test-runner/measurement_kit', '-g', ('--reportfile=/data/ndt-%d.njson' % now), 'ndt'])
else:
result_raw = subprocess.check_output(['/test-runner/measurement_kit', '-g', ('--reportfile=/data/ndt-%d.njson' % now), 'ndt', '-C', country_code])
return result_raw<|docstring|>Runs the NDT test as a subprocess and returns the raw results.
Args:
`country_code`: A capitalized, two-letter country code representing the
location of the desired test server. If no country code is supplied,
the script uses the default mlab_ns behavior.
Returns:
The STDOUT of the call to `measurement_kit`.<|endoftext|> |
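`do_ndt_test` builds a plain argv list for `measurement_kit`, appending `-C <code>` only when a country code is given. The command construction can be sketched without invoking the binary (the timestamp is arbitrary; paths are copied from the record):

```python
def build_ndt_argv(now, country_code=''):
    # Base invocation: geolocation on, njson report keyed by the epoch timestamp.
    argv = ['/test-runner/measurement_kit', '-g',
            '--reportfile=/data/ndt-%d.njson' % now, 'ndt']
    if country_code != '':
        argv += ['-C', country_code]
    return argv

default_cmd = build_ndt_argv(1500000000)
fr_cmd = build_ndt_argv(1500000000, 'FR')
```

Keeping the two-letter code optional preserves mlab_ns's default server selection when no country is requested.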
27b06d5c7570eb14a8d720801f2e587a293582aada271de8fb9314a2a2bcd628 | def summarize_tests():
'Converts measurement_kit .njson test results into a single .csv file.\n\n This function checks the `/data/` directory for all files, reads the json\n into an object and writes the object into a csv file that it stores in\n `/share/history.csv` (the `share` directory is shared between this Docker\n image and the dashboard image).\n '
with tempfile.NamedTemporaryFile(delete=False) as tmpfile:
historywriter = csv.writer(tmpfile)
historywriter.writerow(['Datetime', 'Download', 'Upload'])
for file in sorted(os.listdir('/data')):
with open(('/data/' + file)) as json_data:
try:
d = json.load(json_data)
historywriter.writerow([d['measurement_start_time'], d['test_keys']['simple']['download'], d['test_keys']['simple']['upload']])
except Exception as e:
logging.error('Failed to write row %s', e)
pass
tmp_loc = tmpfile.name
logging.info('Updating /share/history.csv')
shutil.move(tmp_loc, '/share/history.csv') | Converts measurement_kit .njson test results into a single .csv file.
This function checks the `/data/` directory for all files, reads the json
into an object and writes the object into a csv file that it stores in
`/share/history.csv` (the `share` directory is shared between this Docker
image and the dashboard image). | run.py | summarize_tests | stephen-soltesz/murakami | 0 | python | def summarize_tests():
'Converts measurement_kit .njson test results into a single .csv file.\n\n This function checks the `/data/` directory for all files, reads the json\n into an object and writes the object into a csv file that it stores in\n `/share/history.csv` (the `share` directory is shared between this Docker\n image and the dashboard image).\n '
with tempfile.NamedTemporaryFile(delete=False) as tmpfile:
historywriter = csv.writer(tmpfile)
historywriter.writerow(['Datetime', 'Download', 'Upload'])
for file in sorted(os.listdir('/data')):
with open(('/data/' + file)) as json_data:
try:
d = json.load(json_data)
historywriter.writerow([d['measurement_start_time'], d['test_keys']['simple']['download'], d['test_keys']['simple']['upload']])
except Exception as e:
logging.error('Failed to write row %s', e)
pass
tmp_loc = tmpfile.name
logging.info('Updating /share/history.csv')
shutil.move(tmp_loc, '/share/history.csv') | def summarize_tests():
'Converts measurement_kit .njson test results into a single .csv file.\n\n This function checks the `/data/` directory for all files, reads the json\n into an object and writes the object into a csv file that it stores in\n `/share/history.csv` (the `share` directory is shared between this Docker\n image and the dashboard image).\n '
with tempfile.NamedTemporaryFile(delete=False) as tmpfile:
historywriter = csv.writer(tmpfile)
historywriter.writerow(['Datetime', 'Download', 'Upload'])
for file in sorted(os.listdir('/data')):
with open(('/data/' + file)) as json_data:
try:
d = json.load(json_data)
historywriter.writerow([d['measurement_start_time'], d['test_keys']['simple']['download'], d['test_keys']['simple']['upload']])
except Exception as e:
logging.error('Failed to write row %s', e)
pass
tmp_loc = tmpfile.name
logging.info('Updating /share/history.csv')
shutil.move(tmp_loc, '/share/history.csv')<|docstring|>Converts measurement_kit .njson test results into a single .csv file.
This function checks the `/data/` directory for all files, reads the json
into an object and writes the object into a csv file that it stores in
`/share/history.csv` (the `share` directory is shared between this Docker
image and the dashboard image).<|endoftext|> |
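Each CSV row written by `summarize_tests` pulls three fields out of the parsed njson object. A self-contained sketch of that extraction (the sample record is hypothetical, shaped after the keys the function reads):

```python
import csv
import io
import json

def measurement_row(record):
    # The three summary fields written to history.csv; the key layout
    # mirrors the measurement_kit njson structure used above.
    return [record['measurement_start_time'],
            record['test_keys']['simple']['download'],
            record['test_keys']['simple']['upload']]

# Hypothetical njson line shaped like a measurement_kit NDT result.
sample = json.loads('{"measurement_start_time": "2019-01-01 00:00:00",'
                    ' "test_keys": {"simple": {"download": 42.5, "upload": 7.1}}}')
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['Datetime', 'Download', 'Upload'])
writer.writerow(measurement_row(sample))
```

Writing to a `StringIO` here stands in for the temporary file the real function uses.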
97d1121b1e7c153234a2b8398280e31f5c3965fb1af611eaab4f36dd875eee39 | def perform_test_loop(expected_sleep_secs=((12 * 60) * 60)):
"The main loop of the script.\n\n It gathers the computer's location, then loops forever calling\n measurement_kit once with the default mlab_ns behavior. It then sleeps\n for a random interval (determined by an exponential distribution) that\n will average out to expected_sleep_secs.\n\n Args:\n `expected_sleep_secs`: The desired average time, in seconds,\n between tests.\n "
while True:
try:
_ = do_ndt_test('')
except subprocess.CalledProcessError as ex:
logging.error('Error in NDT test: %s', ex)
summarize_tests()
sleeptime = random.expovariate((1.0 / expected_sleep_secs))
resume_time = (datetime.datetime.utcnow() + datetime.timedelta(seconds=sleeptime))
logging.info('Sleeping for %u seconds (until %s)', sleeptime, resume_time)
time.sleep(sleeptime) | The main loop of the script.
It gathers the computer's location, then loops forever calling
measurement_kit once with the default mlab_ns behavior. It then sleeps
for a random interval (determined by an exponential distribution) that
will average out to expected_sleep_seconds.
Args:
`expected_sleep_secs`: The desired average time, in seconds,
between tests. | run.py | perform_test_loop | stephen-soltesz/murakami | 0 | python | def perform_test_loop(expected_sleep_secs=((12 * 60) * 60)):
"The main loop of the script.\n\n It gathers the computer's location, then loops forever calling\n measurement_kit once with the default mlab_ns behavior. It then sleeps\n for a random interval (determined by an exponential distribution) that\n will average out to expected_sleep_secs.\n\n Args:\n `expected_sleep_secs`: The desired average time, in seconds,\n between tests.\n "
while True:
try:
_ = do_ndt_test('')
except subprocess.CalledProcessError as ex:
logging.error('Error in NDT test: %s', ex)
summarize_tests()
sleeptime = random.expovariate((1.0 / expected_sleep_secs))
resume_time = (datetime.datetime.utcnow() + datetime.timedelta(seconds=sleeptime))
logging.info('Sleeping for %u seconds (until %s)', sleeptime, resume_time)
time.sleep(sleeptime) | def perform_test_loop(expected_sleep_secs=((12 * 60) * 60)):
"The main loop of the script.\n\n It gathers the computer's location, then loops forever calling\n measurement_kit once with the default mlab_ns behavior. It then sleeps\n for a random interval (determined by an exponential distribution) that\n will average out to expected_sleep_secs.\n\n Args:\n `expected_sleep_secs`: The desired average time, in seconds,\n between tests.\n "
while True:
try:
_ = do_ndt_test('')
except subprocess.CalledProcessError as ex:
logging.error('Error in NDT test: %s', ex)
summarize_tests()
sleeptime = random.expovariate((1.0 / expected_sleep_secs))
resume_time = (datetime.datetime.utcnow() + datetime.timedelta(seconds=sleeptime))
logging.info('Sleeping for %u seconds (until %s)', sleeptime, resume_time)
time.sleep(sleeptime)<|docstring|>The main loop of the script.
It gathers the computer's location, then loops forever calling
measurement_kit once with the default mlab_ns behavior. It then sleeps
for a random interval (determined by an exponential distribution) that
will average out to expected_sleep_seconds.
Args:
`expected_sleep_secs`: The desired average time, in seconds,
between tests.<|endoftext|> |
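The sleep interval is drawn from `random.expovariate(1.0 / expected_sleep_secs)`; since `expovariate` takes the rate (1/mean), the intervals average out to `expected_sleep_secs`. A quick numerical check of that claim, using a seeded RNG (the helper name is illustrative):

```python
import random

def average_interval(expected_sleep_secs, trials=20000, seed=7):
    # expovariate takes the rate lambd = 1/mean, so with enough samples
    # the empirical mean should sit close to expected_sleep_secs.
    rng = random.Random(seed)
    total = sum(rng.expovariate(1.0 / expected_sleep_secs)
                for _ in range(trials))
    return total / trials
```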
0e1902fd9e835e7033251e308e010e47b57ffa6be680ba725e05e70e61b731d9 | def test_unknown_file(err, filename):
'Tests some sketchy files that require silly code.'
path = filename.split('/')
name = path.pop()
if (name == 'chromelist.txt'):
err.notice(('testcases_packagelayout', 'test_unknown_file', 'deprecated_file'), 'Extension contains a deprecated file', ('The file "%s" is no longer supported by any modern Mozilla product.' % filename), filename)
return True | Tests some sketchy files that require silly code. | validator/testcases/packagelayout.py | test_unknown_file | kmaglione/amo-validator | 1 | python | def test_unknown_file(err, filename):
path = filename.split('/')
name = path.pop()
if (name == 'chromelist.txt'):
err.notice(('testcases_packagelayout', 'test_unknown_file', 'deprecated_file'), 'Extension contains a deprecated file', ('The file "%s" is no longer supported by any modern Mozilla product.' % filename), filename)
return True | def test_unknown_file(err, filename):
path = filename.split('/')
name = path.pop()
if (name == 'chromelist.txt'):
err.notice(('testcases_packagelayout', 'test_unknown_file', 'deprecated_file'), 'Extension contains a deprecated file', ('The file "%s" is no longer supported by any modern Mozilla product.' % filename), filename)
return True<|docstring|>Tests some sketchy files that require silly code.<|endoftext|> |
a06899c2ee4b19787270d0b485d42a4f432344327fa2835d16b2726e2a0e4e9c | @decorator.register_test(tier=1)
def test_blacklisted_files(err, xpi_package=None):
'Detects blacklisted files and extensions.'
flagged_files = []
HELP = ('Please try to avoid interacting with or bundling native binaries whenever possible. If you are bundling binaries for performance reasons, please consider alternatives such as Emscripten (http://mzl.la/1KrSUh2), JavaScript typed arrays (http://mzl.la/1Iw02sr), and Worker threads (http://mzl.la/1OGfAcc).', 'Any code which bundles native binaries must submit full source code, and undergo manual code review for at least one submission.')
for name in xpi_package:
file_ = xpi_package.info(name)
extension = file_['extension']
if (extension in blacklisted_extensions):
err.metadata['contains_binary_extension'] = True
flagged_files.append(name)
continue
try:
zip = xpi_package.zf.open(name)
except BadZipfile:
err.error(('testcases_content', 'test_packed_packages', 'jar_subpackage_corrupt'), 'Package corrupt', 'The package appears to be corrupt.', file=xpi_package.filename)
break
bytes = tuple([ord(x) for x in zip.read(4)])
if [x for x in blacklisted_magic_numbers if (bytes[0:len(x)] == x)]:
err.metadata['contains_binary_content'] = True
err.warning(err_id=('testcases_packagelayout', 'test_blacklisted_files', 'disallowed_file_type'), warning='Flagged file type found', description='A file was found to contain flagged content (i.e.: executable data, potentially unauthorized scripts, etc.).', editors_only=True, signing_help=HELP, signing_severity='high', filename=name)
if flagged_files:
if (sum((1 for f in flagged_files if f.endswith('.class'))) > JAVA_JAR_THRESHOLD):
err.warning(err_id=('testcases_packagelayout', 'test_blacklisted_files', 'java_jar'), warning='Java JAR file detected.', description='A Java JAR file was detected in the add-on.', filename=xpi_package.filename, signing_help=HELP, signing_severity='high')
else:
err.warning(err_id=('testcases_packagelayout', 'test_blacklisted_files', 'disallowed_extension'), warning='Flagged file extensions found.', description=('Files whose names end with flagged extensions have been found in the add-on.', 'The extension of these files are flagged because they usually identify binary components. Please see http://addons.mozilla.org/developers/docs/policies/reviews#section-binary for more information on the binary content review process.', '\n'.join(flagged_files)), editors_only=True, signing_help=HELP, signing_severity='medium', filename=name) | Detects blacklisted files and extensions. | validator/testcases/packagelayout.py | test_blacklisted_files | kmaglione/amo-validator | 1 | python | @decorator.register_test(tier=1)
def test_blacklisted_files(err, xpi_package=None):
flagged_files = []
HELP = ('Please try to avoid interacting with or bundling native binaries whenever possible. If you are bundling binaries for performance reasons, please consider alternatives such as Emscripten (http://mzl.la/1KrSUh2), JavaScript typed arrays (http://mzl.la/1Iw02sr), and Worker threads (http://mzl.la/1OGfAcc).', 'Any code which bundles native binaries must submit full source code, and undergo manual code review for at least one submission.')
for name in xpi_package:
file_ = xpi_package.info(name)
extension = file_['extension']
if (extension in blacklisted_extensions):
err.metadata['contains_binary_extension'] = True
flagged_files.append(name)
continue
try:
zip = xpi_package.zf.open(name)
except BadZipfile:
err.error(('testcases_content', 'test_packed_packages', 'jar_subpackage_corrupt'), 'Package corrupt', 'The package appears to be corrupt.', file=xpi_package.filename)
break
bytes = tuple([ord(x) for x in zip.read(4)])
if [x for x in blacklisted_magic_numbers if (bytes[0:len(x)] == x)]:
err.metadata['contains_binary_content'] = True
err.warning(err_id=('testcases_packagelayout', 'test_blacklisted_files', 'disallowed_file_type'), warning='Flagged file type found', description='A file was found to contain flagged content (i.e.: executable data, potentially unauthorized scripts, etc.).', editors_only=True, signing_help=HELP, signing_severity='high', filename=name)
if flagged_files:
if (sum((1 for f in flagged_files if f.endswith('.class'))) > JAVA_JAR_THRESHOLD):
err.warning(err_id=('testcases_packagelayout', 'test_blacklisted_files', 'java_jar'), warning='Java JAR file detected.', description='A Java JAR file was detected in the add-on.', filename=xpi_package.filename, signing_help=HELP, signing_severity='high')
else:
err.warning(err_id=('testcases_packagelayout', 'test_blacklisted_files', 'disallowed_extension'), warning='Flagged file extensions found.', description=('Files whose names end with flagged extensions have been found in the add-on.', 'The extension of these files are flagged because they usually identify binary components. Please see http://addons.mozilla.org/developers/docs/policies/reviews#section-binary for more information on the binary content review process.', '\n'.join(flagged_files)), editors_only=True, signing_help=HELP, signing_severity='medium', filename=name) | @decorator.register_test(tier=1)
def test_blacklisted_files(err, xpi_package=None):
flagged_files = []
HELP = ('Please try to avoid interacting with or bundling native binaries whenever possible. If you are bundling binaries for performance reasons, please consider alternatives such as Emscripten (http://mzl.la/1KrSUh2), JavaScript typed arrays (http://mzl.la/1Iw02sr), and Worker threads (http://mzl.la/1OGfAcc).', 'Any code which bundles native binaries must submit full source code, and undergo manual code review for at least one submission.')
for name in xpi_package:
file_ = xpi_package.info(name)
extension = file_['extension']
if (extension in blacklisted_extensions):
err.metadata['contains_binary_extension'] = True
flagged_files.append(name)
continue
try:
zip = xpi_package.zf.open(name)
except BadZipfile:
err.error(('testcases_content', 'test_packed_packages', 'jar_subpackage_corrupt'), 'Package corrupt', 'The package appears to be corrupt.', file=xpi_package.filename)
break
bytes = tuple([ord(x) for x in zip.read(4)])
if [x for x in blacklisted_magic_numbers if (bytes[0:len(x)] == x)]:
err.metadata['contains_binary_content'] = True
err.warning(err_id=('testcases_packagelayout', 'test_blacklisted_files', 'disallowed_file_type'), warning='Flagged file type found', description='A file was found to contain flagged content (i.e.: executable data, potentially unauthorized scripts, etc.).', editors_only=True, signing_help=HELP, signing_severity='high', filename=name)
if flagged_files:
if (sum((1 for f in flagged_files if f.endswith('.class'))) > JAVA_JAR_THRESHOLD):
err.warning(err_id=('testcases_packagelayout', 'test_blacklisted_files', 'java_jar'), warning='Java JAR file detected.', description='A Java JAR file was detected in the add-on.', filename=xpi_package.filename, signing_help=HELP, signing_severity='high')
else:
err.warning(err_id=('testcases_packagelayout', 'test_blacklisted_files', 'disallowed_extension'), warning='Flagged file extensions found.', description=('Files whose names end with flagged extensions have been found in the add-on.', 'The extension of these files are flagged because they usually identify binary components. Please see http://addons.mozilla.org/developers/docs/policies/reviews#section-binary for more information on the binary content review process.', '\n'.join(flagged_files)), editors_only=True, signing_help=HELP, signing_severity='medium', filename=name)<|docstring|>Detects blacklisted files and extensions.<|endoftext|> |
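The binary sniffing above compares the first four bytes of each archive member against tuples of magic numbers via `bytes[0:len(x)] == x`. A condensed sketch of that prefix check (the signature list here is illustrative, not the validator's actual `blacklisted_magic_numbers`):

```python
# A few file signatures in the same tuple-of-ints form the test compares
# against; values chosen for illustration only.
MAGIC_NUMBERS = [
    (0x4D, 0x5A),              # Windows PE ("MZ")
    (0x7F, 0x45, 0x4C, 0x46),  # ELF
    (0xCA, 0xFE, 0xBA, 0xBE),  # Java class / Mach-O fat binary
]

def looks_binary(first_bytes):
    # A hit on any known prefix flags the file as binary content.
    head = tuple(first_bytes[:4])
    return any(head[:len(sig)] == sig for sig in MAGIC_NUMBERS)
```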
6a2ab48adc17703eb6e46c368551d252335f86d35a9db333a226f2ac45349184 | @decorator.register_test(tier=1)
def test_godlikea(err, xpi_package):
'Test to make sure that the godlikea namespace is not in use.'
if ('chrome/godlikea.jar' in xpi_package):
err.error(err_id=('testcases_packagelayout', 'test_godlikea'), error="Banned 'godlikea' chrome namespace", description="The 'godlikea' chrome namespace is generated from a template and should be replaced with something unique to your add-on to avoid name conflicts.", filename='chrome/godlikea.jar') | Test to make sure that the godlikea namespace is not in use. | validator/testcases/packagelayout.py | test_godlikea | kmaglione/amo-validator | 1 | python | @decorator.register_test(tier=1)
def test_godlikea(err, xpi_package):
if ('chrome/godlikea.jar' in xpi_package):
err.error(err_id=('testcases_packagelayout', 'test_godlikea'), error="Banned 'godlikea' chrome namespace", description="The 'godlikea' chrome namespace is generated from a template and should be replaced with something unique to your add-on to avoid name conflicts.", filename='chrome/godlikea.jar') | @decorator.register_test(tier=1)
def test_godlikea(err, xpi_package):
if ('chrome/godlikea.jar' in xpi_package):
err.error(err_id=('testcases_packagelayout', 'test_godlikea'), error="Banned 'godlikea' chrome namespace", description="The 'godlikea' chrome namespace is generated from a template and should be replaced with something unique to your add-on to avoid name conflicts.", filename='chrome/godlikea.jar')<|docstring|>Test to make sure that the godlikea namespace is not in use.<|endoftext|>
a51494a8008302da3476f9c319f699c948ee571595cab6bb2bcd6faf554bc642 | @decorator.register_test(tier=5, versions={FIREFOX_GUID: version_range('firefox', FF4_MIN), TB_GUID: version_range('thunderbird', '3.3a4pre'), FENNEC_GUID: version_range('fennec', '4.0'), ANDROID_GUID: version_range('android', '11.0a1')})
def test_compatibility_binary(err, xpi_package):
'\n Flags only binary content as being incompatible with future app releases.\n '
description = 'Add-ons with binary components must have their compatibility manually adjusted. Please test your add-on against the new version before updating your maxVersion.'
chrome = err.get_resource('chrome.manifest')
if (not chrome):
return
for entry in chrome.entries:
if (entry['type'] == 'binary-component'):
err.metadata['binary_components'] = True
err.notice(err_id=('testcases_packagelayout', 'test_compatibility_binary', 'disallowed_file_type'), notice='Flagged file type found', description=(('A file (%s) was registered as a binary component. Binary files may not be submitted to AMO unless accompanied by source code.' % entry['args'][0]), description), filename=entry['args'][0], compatibility_type='error') | Flags only binary content as being incompatible with future app releases. | validator/testcases/packagelayout.py | test_compatibility_binary | kmaglione/amo-validator | 1 | python | @decorator.register_test(tier=5, versions={FIREFOX_GUID: version_range('firefox', FF4_MIN), TB_GUID: version_range('thunderbird', '3.3a4pre'), FENNEC_GUID: version_range('fennec', '4.0'), ANDROID_GUID: version_range('android', '11.0a1')})
def test_compatibility_binary(err, xpi_package):
'\n \n '
description = 'Add-ons with binary components must have their compatibility manually adjusted. Please test your add-on against the new version before updating your maxVersion.'
chrome = err.get_resource('chrome.manifest')
if (not chrome):
return
for entry in chrome.entries:
if (entry['type'] == 'binary-component'):
err.metadata['binary_components'] = True
err.notice(err_id=('testcases_packagelayout', 'test_compatibility_binary', 'disallowed_file_type'), notice='Flagged file type found', description=(('A file (%s) was registered as a binary component. Binary files may not be submitted to AMO unless accompanied by source code.' % entry['args'][0]), description), filename=entry['args'][0], compatibility_type='error') | @decorator.register_test(tier=5, versions={FIREFOX_GUID: version_range('firefox', FF4_MIN), TB_GUID: version_range('thunderbird', '3.3a4pre'), FENNEC_GUID: version_range('fennec', '4.0'), ANDROID_GUID: version_range('android', '11.0a1')})
def test_compatibility_binary(err, xpi_package):
'\n \n '
description = 'Add-ons with binary components must have their compatibility manually adjusted. Please test your add-on against the new version before updating your maxVersion.'
chrome = err.get_resource('chrome.manifest')
if (not chrome):
return
for entry in chrome.entries:
if (entry['type'] == 'binary-component'):
err.metadata['binary_components'] = True
err.notice(err_id=('testcases_packagelayout', 'test_compatibility_binary', 'disallowed_file_type'), notice='Flagged file type found', description=(('A file (%s) was registered as a binary component. Binary files may not be submitted to AMO unless accompanied by source code.' % entry['args'][0]), description), filename=entry['args'][0], compatibility_type='error')<|docstring|>Flags only binary content as being incompatible with future app releases.<|endoftext|> |
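The loop above walks parsed chrome.manifest entries and flags any whose type is `binary-component`. A sketch of that filter, assuming the entry dicts have the `type`/`args` shape implied by the code (not taken from chrome.manifest documentation):

```python
def binary_component_files(entries):
    # entries: parsed chrome.manifest lines as dicts with 'type' and
    # 'args' keys -- the shape the loop above reads from.
    return [e['args'][0] for e in entries
            if e['type'] == 'binary-component']
```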
354d3fa3b0bec25a52baaca016157b632b415f592e75b835ade6c6371a14a76d | @decorator.register_test(tier=1)
def test_layout_all(err, xpi_package):
'Tests the well-formedness of extensions.'
if xpi_package.subpackage:
return
if ((not err.get_resource('has_install_rdf')) and (not err.get_resource('bad_install_rdf')) and (not err.get_resource('has_package_json'))):
err.error(('testcases_packagelayout', 'test_layout_all', 'missing_install_rdf'), 'Add-on missing install.rdf.', 'All add-ons require an install.rdf file.')
return
package_namelist = list(xpi_package.zf.namelist())
package_nameset = set(package_namelist)
if (len(package_namelist) != len(package_nameset)):
err.error(err_id=('testcases_packagelayout', 'test_layout_all', 'duplicate_entries'), error='Package contains duplicate entries', description='The package contains multiple entries with the same name. This practice has been banned. Try unzipping and re-zipping your add-on package and try again.') | Tests the well-formedness of extensions. | validator/testcases/packagelayout.py | test_layout_all | kmaglione/amo-validator | 1 | python | @decorator.register_test(tier=1)
def test_layout_all(err, xpi_package):
if xpi_package.subpackage:
return
if ((not err.get_resource('has_install_rdf')) and (not err.get_resource('bad_install_rdf')) and (not err.get_resource('has_package_json'))):
err.error(('testcases_packagelayout', 'test_layout_all', 'missing_install_rdf'), 'Add-on missing install.rdf.', 'All add-ons require an install.rdf file.')
return
package_namelist = list(xpi_package.zf.namelist())
package_nameset = set(package_namelist)
if (len(package_namelist) != len(package_nameset)):
err.error(err_id=('testcases_packagelayout', 'test_layout_all', 'duplicate_entries'), error='Package contains duplicate entries', description='The package contains multiple entries with the same name. This practice has been banned. Try unzipping and re-zipping your add-on package and try again.') | @decorator.register_test(tier=1)
def test_layout_all(err, xpi_package):
if xpi_package.subpackage:
return
if ((not err.get_resource('has_install_rdf')) and (not err.get_resource('bad_install_rdf')) and (not err.get_resource('has_package_json'))):
err.error(('testcases_packagelayout', 'test_layout_all', 'missing_install_rdf'), 'Add-on missing install.rdf.', 'All add-ons require an install.rdf file.')
return
package_namelist = list(xpi_package.zf.namelist())
package_nameset = set(package_namelist)
if (len(package_namelist) != len(package_nameset)):
err.error(err_id=('testcases_packagelayout', 'test_layout_all', 'duplicate_entries'), error='Package contains duplicate entries', description='The package contains multiple entries with the same name. This practice has been banned. Try unzipping and re-zipping your add-on package and try again.')<|docstring|>Tests the well-formedness of extensions.<|endoftext|> |
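The duplicate-entry check above relies on a list and its set having different lengths. The same idea can also report which names repeat, so an error message can cite them; a small sketch (the helper is hypothetical, not part of the validator):

```python
def duplicate_entries(namelist):
    # Walk the zip namelist once, collecting names seen more than once.
    seen, dupes = set(), set()
    for name in namelist:
        if name in seen:
            dupes.add(name)
        seen.add(name)
    return sorted(dupes)
```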
6a9f90bf713bf4f24c0613a1b0799a297928be587bae1a2b844b641e603700e5 | @decorator.register_test(tier=1, expected_type=3)
def test_dictionary_layout(err, xpi_package=None):
'Ensures that dictionary packages contain the necessary\n components and that there are no other extraneous files lying\n around.'
mandatory_files = ['install.rdf', 'dictionaries/*.aff', 'dictionaries/*.dic']
whitelisted_files = ['install.js', 'icon.png', 'icon64.png', 'dictionaries/*.aff', 'dictionaries/*.dic', 'chrome.manifest', 'chrome/*']
whitelisted_extensions = ('txt',)
test_layout(err, xpi_package, mandatory_files, whitelisted_files, whitelisted_extensions, 'dictionary') | Ensures that dictionary packages contain the necessary
components and that there are no other extraneous files lying
around. | validator/testcases/packagelayout.py | test_dictionary_layout | kmaglione/amo-validator | 1 | python | @decorator.register_test(tier=1, expected_type=3)
def test_dictionary_layout(err, xpi_package=None):
'Ensures that dictionary packages contain the necessary\n components and that there are no other extraneous files lying\n around.'
mandatory_files = ['install.rdf', 'dictionaries/*.aff', 'dictionaries/*.dic']
whitelisted_files = ['install.js', 'icon.png', 'icon64.png', 'dictionaries/*.aff', 'dictionaries/*.dic', 'chrome.manifest', 'chrome/*']
whitelisted_extensions = ('txt',)
test_layout(err, xpi_package, mandatory_files, whitelisted_files, whitelisted_extensions, 'dictionary') | @decorator.register_test(tier=1, expected_type=3)
def test_dictionary_layout(err, xpi_package=None):
'Ensures that dictionary packages contain the necessary\n components and that there are no other extraneous files lying\n around.'
mandatory_files = ['install.rdf', 'dictionaries/*.aff', 'dictionaries/*.dic']
whitelisted_files = ['install.js', 'icon.png', 'icon64.png', 'dictionaries/*.aff', 'dictionaries/*.dic', 'chrome.manifest', 'chrome/*']
whitelisted_extensions = ('txt',)
test_layout(err, xpi_package, mandatory_files, whitelisted_files, whitelisted_extensions, 'dictionary')<|docstring|>Ensures that dictionary packages contain the necessary
components and that there are no other extraneous files lying
around.<|endoftext|> |
653484737d56009d9e89d4a4c9e534844ea597ed37e4dcebd262752046d06562 | @decorator.register_test(tier=1, expected_type=4)
def test_langpack_layout(err, xpi_package=None):
'Ensures that language packs only contain exactly what they\n need and nothing more. Otherwise, somebody could sneak something\n into them.'
mandatory_files = ['install.rdf', 'chrome.manifest']
whitelisted_files = ['chrome/*.jar']
whitelisted_extensions = ('manifest', 'rdf', 'jar', 'dtd', 'properties', 'xhtml', 'css')
test_layout(err, xpi_package, mandatory_files, whitelisted_files, whitelisted_extensions, 'language pack') | Ensures that language packs only contain exactly what they
need and nothing more. Otherwise, somebody could sneak something
into them. | validator/testcases/packagelayout.py | test_langpack_layout | kmaglione/amo-validator | 1 | python | @decorator.register_test(tier=1, expected_type=4)
def test_langpack_layout(err, xpi_package=None):
'Ensures that language packs only contain exactly what they\n need and nothing more. Otherwise, somebody could sneak something\n into them.'
mandatory_files = ['install.rdf', 'chrome.manifest']
whitelisted_files = ['chrome/*.jar']
whitelisted_extensions = ('manifest', 'rdf', 'jar', 'dtd', 'properties', 'xhtml', 'css')
test_layout(err, xpi_package, mandatory_files, whitelisted_files, whitelisted_extensions, 'language pack') | @decorator.register_test(tier=1, expected_type=4)
def test_langpack_layout(err, xpi_package=None):
'Ensures that language packs only contain exactly what they\n need and nothing more. Otherwise, somebody could sneak something\n into them.'
mandatory_files = ['install.rdf', 'chrome.manifest']
whitelisted_files = ['chrome/*.jar']
whitelisted_extensions = ('manifest', 'rdf', 'jar', 'dtd', 'properties', 'xhtml', 'css')
test_layout(err, xpi_package, mandatory_files, whitelisted_files, whitelisted_extensions, 'language pack')<|docstring|>Ensures that language packs only contain exactly what they
need and nothing more. Otherwise, somebody could sneak something
into them.<|endoftext|>
55d52b10530257a1d8de47f1739e7ff7e92c31060cf1925f928391e590cd3f57 | @decorator.register_test(tier=1, expected_type=2)
def test_theme_layout(err, xpi_package=None):
'Ensures that themes only contain exactly what they need and\n nothing more. Otherwise, somebody could sneak something\n into them.'
mandatory_files = ['install.rdf', 'chrome.manifest']
whitelisted_files = ['chrome/*.jar']
test_layout(err, xpi_package, mandatory_files, whitelisted_files, None, 'theme') | Ensures that themes only contain exactly what they need and
nothing more. Otherwise, somebody could sneak something
into them. | validator/testcases/packagelayout.py | test_theme_layout | kmaglione/amo-validator | 1 | python | @decorator.register_test(tier=1, expected_type=2)
def test_theme_layout(err, xpi_package=None):
'Ensures that themes only contain exactly what they need and\n nothing more. Otherwise, somebody could sneak something\n into them.'
mandatory_files = ['install.rdf', 'chrome.manifest']
whitelisted_files = ['chrome/*.jar']
test_layout(err, xpi_package, mandatory_files, whitelisted_files, None, 'theme') | @decorator.register_test(tier=1, expected_type=2)
def test_theme_layout(err, xpi_package=None):
'Ensures that themes only contain exactly what they need and\n nothing more. Otherwise, somebody could sneak something\n into them.'
mandatory_files = ['install.rdf', 'chrome.manifest']
whitelisted_files = ['chrome/*.jar']
test_layout(err, xpi_package, mandatory_files, whitelisted_files, None, 'theme')<|docstring|>Ensures that themes only contain exactly what they need and
nothing more. Otherwise, somebody could sneak something
into them.<|endoftext|> |
0ef35d80d6563f8dd5cdea7f69cc6d46372232a5c6e15fbe57874c90271e8501 | def test_layout(err, xpi, mandatory, whitelisted, white_extensions=None, pack_type='Unknown Addon'):
'Tests the layout of a package. Pass in the various types of files\n and their levels of requirement and this guy will figure out which\n files should and should not be in the package.'
for file_ in xpi:
if ((file_ == '.DS_Store') or file_.startswith('__MACOSX/')):
continue
mfile_list = [mf for mf in mandatory if fnmatch(file_, mf)]
if mfile_list:
mfile = mfile_list[0]
mandatory.remove(mfile)
continue
if any((fnmatch(file_, wlfile) for wlfile in whitelisted)):
continue
if file_.endswith('/'):
continue
if test_unknown_file(err, file_):
continue
if ((white_extensions is None) or (file_.split('.')[(- 1)] in white_extensions)):
continue
err.warning(('testcases_packagelayout', 'test_layout', 'unknown_file'), 'Unknown file found in add-on', ['Files have been detected that are not allowed within this type of add-on. Remove the file or use an alternative, supported file format instead.', ('Detected file: %s' % file_)], file_)
if mandatory:
err.warning(('testcases_packagelayout', 'test_layout', 'missing_required'), 'Required file missing', ['This add-on is missing required files. Consult the documentation for a full list of required files.', ("Add-ons of type '%s' require files: %s" % (pack_type, ', '.join(mandatory)))]) | Tests the layout of a package. Pass in the various types of files
and their levels of requirement and this guy will figure out which
files should and should not be in the package. | validator/testcases/packagelayout.py | test_layout | kmaglione/amo-validator | 1 | python | def test_layout(err, xpi, mandatory, whitelisted, white_extensions=None, pack_type='Unknown Addon'):
'Tests the layout of a package. Pass in the various types of files\n and their levels of requirement and this guy will figure out which\n files should and should not be in the package.'
for file_ in xpi:
if ((file_ == '.DS_Store') or file_.startswith('__MACOSX/')):
continue
mfile_list = [mf for mf in mandatory if fnmatch(file_, mf)]
if mfile_list:
mfile = mfile_list[0]
mandatory.remove(mfile)
continue
if any((fnmatch(file_, wlfile) for wlfile in whitelisted)):
continue
if file_.endswith('/'):
continue
if test_unknown_file(err, file_):
continue
if ((white_extensions is None) or (file_.split('.')[(- 1)] in white_extensions)):
continue
err.warning(('testcases_packagelayout', 'test_layout', 'unknown_file'), 'Unknown file found in add-on', ['Files have been detected that are not allowed within this type of add-on. Remove the file or use an alternative, supported file format instead.', ('Detected file: %s' % file_)], file_)
if mandatory:
err.warning(('testcases_packagelayout', 'test_layout', 'missing_required'), 'Required file missing', ['This add-on is missing required files. Consult the documentation for a full list of required files.', ("Add-ons of type '%s' require files: %s" % (pack_type, ', '.join(mandatory)))]) | def test_layout(err, xpi, mandatory, whitelisted, white_extensions=None, pack_type='Unknown Addon'):
'Tests the layout of a package. Pass in the various types of files\n and their levels of requirement and this guy will figure out which\n files should and should not be in the package.'
for file_ in xpi:
if ((file_ == '.DS_Store') or file_.startswith('__MACOSX/')):
continue
mfile_list = [mf for mf in mandatory if fnmatch(file_, mf)]
if mfile_list:
mfile = mfile_list[0]
mandatory.remove(mfile)
continue
if any((fnmatch(file_, wlfile) for wlfile in whitelisted)):
continue
if file_.endswith('/'):
continue
if test_unknown_file(err, file_):
continue
if ((white_extensions is None) or (file_.split('.')[(- 1)] in white_extensions)):
continue
err.warning(('testcases_packagelayout', 'test_layout', 'unknown_file'), 'Unknown file found in add-on', ['Files have been detected that are not allowed within this type of add-on. Remove the file or use an alternative, supported file format instead.', ('Detected file: %s' % file_)], file_)
if mandatory:
err.warning(('testcases_packagelayout', 'test_layout', 'missing_required'), 'Required file missing', ['This add-on is missing required files. Consult the documentation for a full list of required files.', ("Add-ons of type '%s' require files: %s" % (pack_type, ', '.join(mandatory)))])<|docstring|>Tests the layout of a package. Pass in the various types of files
and their levels of requirement and this guy will figure out which
files should and should not be in the package.<|endoftext|> |
77aa875d3ef077bb6017cf87e64056d24c132e7a4ecb93517ff100927c6d7305 | def test_encode_token(token):
' Token serializer encodes a JWT correctly. '
assert (token.count('.') == 2) | Token serializer encodes a JWT correctly. | services/app/src/tests/test_models.py | test_encode_token | chimailo/witti | 0 | python | def test_encode_token(token):
' '
assert (token.count('.') == 2) | def test_encode_token(token):
' '
assert (token.count('.') == 2)<|docstring|>Token serializer encodes a JWT correctly.<|endoftext|> |
47818fe64f1d0096346e4003ce1ae712236eca40cff3bca93aeba28a22747289 | def test_decode_token(token):
' Token decoder decodes a JWT correctly. '
payload = User.decode_auth_token(token)
user = User.find_by_id(payload.get('id'))
assert (isinstance(user, User) is True)
assert (user.email == '[email protected]') | Token decoder decodes a JWT correctly. | services/app/src/tests/test_models.py | test_decode_token | chimailo/witti | 0 | python | def test_decode_token(token):
' '
payload = User.decode_auth_token(token)
user = User.find_by_id(payload.get('id'))
assert (isinstance(user, User) is True)
assert (user.email == '[email protected]') | def test_decode_token(token):
' '
payload = User.decode_auth_token(token)
user = User.find_by_id(payload.get('id'))
assert (isinstance(user, User) is True)
assert (user.email == '[email protected]')<|docstring|>Token decoder decodes a JWT correctly.<|endoftext|> |
c41e64f5e995986e79557fd547d311cfc526f92a61839d210a8a3b602163e7ef | def test_decode_token_invalid(token):
" Token decoder returns 'Invalid token' when\n it's been tampered with."
payload = User.decode_auth_token(f'{token}1337')
assert (isinstance(payload, User) is False)
assert ('Invalid token' in payload) | Token decoder returns 'Invalid token' when
it's been tampered with. | services/app/src/tests/test_models.py | test_decode_token_invalid | chimailo/witti | 0 | python | def test_decode_token_invalid(token):
" Token decoder returns 'Invalid token' when\n it's been tampered with."
payload = User.decode_auth_token(f'{token}1337')
assert (isinstance(payload, User) is False)
assert ('Invalid token' in payload) | def test_decode_token_invalid(token):
" Token decoder returns 'Invalid token' when\n it's been tampered with."
payload = User.decode_auth_token(f'{token}1337')
assert (isinstance(payload, User) is False)
assert ('Invalid token' in payload)<|docstring|>Token decoder returns 'Invalid token' when
it's been tampered with.<|endoftext|> |
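The token tests above check two properties of a signed JWT: it has three dot-separated segments, and any tampering makes decoding fail. The actual `User.decode_auth_token` implementation is not shown here; the following stdlib-only sketch (hypothetical `SECRET` key, hand-rolled HS256 signing rather than a JWT library) illustrates why both assertions hold:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"test-secret"  # hypothetical signing key, for illustration only

def _b64(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64 for each segment
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode(payload: dict) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    sig = _b64(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"  # three segments -> token.count('.') == 2

def decode(token: str):
    header, body, sig = token.split(".")
    expected = _b64(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return "Invalid token"  # tampered signature, as in the test above
    padded = body + "=" * (-len(body) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = encode({"id": 1})
assert token.count(".") == 2
assert decode(token) == {"id": 1}
assert decode(token + "1337") == "Invalid token"  # same tamper check as the test
```

Appending `1337` corrupts the signature segment, so the HMAC comparison fails — the same behaviour `test_decode_token_invalid` asserts against the real decoder.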
7670e9aa5c55bbd49393766e2ae91d776b65c0796639110c0eebd1e133c915f2 | @patch('aws_secrets.miscellaneous.kms.encrypt')
def test_encrypt(mock_encrypt):
'\n Should encrypt the raw value\n '
mock_encrypt.return_value = b'SecretData'
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1', 'value': 'MyPlainText'}, cipher_text=None)
assert (entry.encrypt() == 'SecretData')
mock_encrypt.assert_called_once_with(session, 'MyPlainText', KEY_ARN) | Should encrypt the raw value | tests/config/providers/secretsmanager/test_entry.py | test_encrypt | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('aws_secrets.miscellaneous.kms.encrypt')
def test_encrypt(mock_encrypt):
'\n \n '
mock_encrypt.return_value = b'SecretData'
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1', 'value': 'MyPlainText'}, cipher_text=None)
assert (entry.encrypt() == 'SecretData')
mock_encrypt.assert_called_once_with(session, 'MyPlainText', KEY_ARN) | @patch('aws_secrets.miscellaneous.kms.encrypt')
def test_encrypt(mock_encrypt):
'\n \n '
mock_encrypt.return_value = b'SecretData'
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1', 'value': 'MyPlainText'}, cipher_text=None)
assert (entry.encrypt() == 'SecretData')
mock_encrypt.assert_called_once_with(session, 'MyPlainText', KEY_ARN)<|docstring|>Should encrypt the raw value<|endoftext|> |
f878a8cddd8347c55a7f4959436d5e245b45de308ff8d1d746cd030ac8a8a748 | @patch('aws_secrets.miscellaneous.kms.encrypt')
def test_encrypt_value_dict(mock_encrypt):
'\n Should encrypt the raw value\n '
mock_encrypt.return_value = b'SecretData'
session = boto3.Session(region_name='us-east-1')
value = {'key': 'MyPlainText'}
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1', 'value': value}, cipher_text=None)
assert (entry.encrypt() == 'SecretData')
mock_encrypt.assert_called_once_with(session, json.dumps(value), KEY_ARN) | Should encrypt the raw value | tests/config/providers/secretsmanager/test_entry.py | test_encrypt_value_dict | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('aws_secrets.miscellaneous.kms.encrypt')
def test_encrypt_value_dict(mock_encrypt):
'\n \n '
mock_encrypt.return_value = b'SecretData'
session = boto3.Session(region_name='us-east-1')
value = {'key': 'MyPlainText'}
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1', 'value': value}, cipher_text=None)
assert (entry.encrypt() == 'SecretData')
mock_encrypt.assert_called_once_with(session, json.dumps(value), KEY_ARN) | @patch('aws_secrets.miscellaneous.kms.encrypt')
def test_encrypt_value_dict(mock_encrypt):
'\n \n '
mock_encrypt.return_value = b'SecretData'
session = boto3.Session(region_name='us-east-1')
value = {'key': 'MyPlainText'}
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1', 'value': value}, cipher_text=None)
assert (entry.encrypt() == 'SecretData')
mock_encrypt.assert_called_once_with(session, json.dumps(value), KEY_ARN)<|docstring|>Should encrypt the raw value<|endoftext|> |
f5ae090994f61aff3bec4016bf244dbb4a8f7917ec5c9fbb83f0566387015cde | @patch('aws_secrets.miscellaneous.kms.encrypt')
def test_encrypt_already_encrypted(mock_encrypt):
'\n Should not encrypt the raw value when the value is already encrypted\n '
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1'}, cipher_text='SecretData')
assert (entry.encrypt() == 'SecretData')
mock_encrypt.assert_not_called() | Should not encrypt the raw value when the value is already encrypted | tests/config/providers/secretsmanager/test_entry.py | test_encrypt_already_encrypted | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('aws_secrets.miscellaneous.kms.encrypt')
def test_encrypt_already_encrypted(mock_encrypt):
'\n \n '
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1'}, cipher_text='SecretData')
assert (entry.encrypt() == 'SecretData')
mock_encrypt.assert_not_called() | @patch('aws_secrets.miscellaneous.kms.encrypt')
def test_encrypt_already_encrypted(mock_encrypt):
'\n \n '
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1'}, cipher_text='SecretData')
assert (entry.encrypt() == 'SecretData')
mock_encrypt.assert_not_called()<|docstring|>Should not encrypt the raw value when the value is already encrypted<|endoftext|> |
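The encrypt tests above all follow one `unittest.mock` pattern: patch `aws_secrets.miscellaneous.kms.encrypt`, stub its `return_value`, then assert on the recorded call. A self-contained sketch of that pattern (the `kms` class and `encrypt_entry` function here are stand-ins, not the real module):

```python
from unittest.mock import patch

class kms:
    """Stand-in for aws_secrets.miscellaneous.kms (hypothetical)."""
    @staticmethod
    def encrypt(session, value, key_arn):
        raise RuntimeError("would call AWS if not patched")

def encrypt_entry(session, value, key_arn):
    # code under test: delegates to kms.encrypt and decodes the bytes result
    return kms.encrypt(session, value, key_arn).decode()

with patch.object(kms, "encrypt") as mock_encrypt:
    mock_encrypt.return_value = b"SecretData"  # stub the AWS call, as in the tests
    result = encrypt_entry("session", "MyPlainText", "key-arn")
    assert result == "SecretData"
    # verifies both that the call happened exactly once and with these arguments
    mock_encrypt.assert_called_once_with("session", "MyPlainText", "key-arn")
```

`patch.object` restores the real attribute when the `with` block exits, which is why each test in the suite can patch independently; the decorator form `@patch('aws_secrets.miscellaneous.kms.encrypt')` used above is the same mechanism applied at the import path.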
291243aa36e754a7230ed3ae304d248bd7d3918e26698f6dc46f10d3ffdc8812 | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_decrypt(mock_decrypt):
'\n Should decrypt the cipher text\n '
mock_decrypt.return_value = b'PlainTextData'
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1'}, cipher_text='SecretData')
assert (entry.decrypt() == 'PlainTextData')
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | Should decrypt the cipher text | tests/config/providers/secretsmanager/test_entry.py | test_decrypt | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_decrypt(mock_decrypt):
'\n \n '
mock_decrypt.return_value = b'PlainTextData'
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1'}, cipher_text='SecretData')
assert (entry.decrypt() == 'PlainTextData')
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_decrypt(mock_decrypt):
'\n \n '
mock_decrypt.return_value = b'PlainTextData'
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1'}, cipher_text='SecretData')
assert (entry.decrypt() == 'PlainTextData')
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN)<|docstring|>Should decrypt the cipher text<|endoftext|> |
b6d8ae44a50af166715b3e8034f55039d6e6334bf20b7f1da42526df1b1359d0 | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_decrypt_with_value_data(mock_decrypt):
'\n Should decrypt the cipher text\n '
mock_decrypt.return_value = b'PlainTextData'
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1', 'value': 'SecretData'}, cipher_text=None)
assert (entry.decrypt() == 'PlainTextData')
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | Should decrypt the cipher text | tests/config/providers/secretsmanager/test_entry.py | test_decrypt_with_value_data | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_decrypt_with_value_data(mock_decrypt):
'\n \n '
mock_decrypt.return_value = b'PlainTextData'
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1', 'value': 'SecretData'}, cipher_text=None)
assert (entry.decrypt() == 'PlainTextData')
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_decrypt_with_value_data(mock_decrypt):
'\n \n '
mock_decrypt.return_value = b'PlainTextData'
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1', 'value': 'SecretData'}, cipher_text=None)
assert (entry.decrypt() == 'PlainTextData')
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN)<|docstring|>Should decrypt the cipher text<|endoftext|> |
8ab4abd893436110efb123b75b598a23b54fff676e87bdb1f1167bc22558f908 | @patch('subprocess.run')
def test_decrypt_with_value_data_not_str(mock_run):
'\n Should decrypt the cipher text\n '
session = boto3.Session(region_name='us-east-1')
mock_run.return_value.stdout = 'myvalue'
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1', 'value': CmdTag('value')}, cipher_text=None)
assert isinstance(entry.decrypt(), CmdTag) | Should decrypt the cipher text | tests/config/providers/secretsmanager/test_entry.py | test_decrypt_with_value_data_not_str | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('subprocess.run')
def test_decrypt_with_value_data_not_str(mock_run):
'\n \n '
session = boto3.Session(region_name='us-east-1')
mock_run.return_value.stdout = 'myvalue'
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1', 'value': CmdTag('value')}, cipher_text=None)
assert isinstance(entry.decrypt(), CmdTag) | @patch('subprocess.run')
def test_decrypt_with_value_data_not_str(mock_run):
'\n \n '
session = boto3.Session(region_name='us-east-1')
mock_run.return_value.stdout = 'myvalue'
entry = SecretManagerEntry(session=session, kms_arn=KEY_ARN, data={'name': 'secret1', 'value': CmdTag('value')}, cipher_text=None)
assert isinstance(entry.decrypt(), CmdTag)<|docstring|>Should decrypt the cipher text<|endoftext|> |
f3fde0c81f924f7bb37dbb63fc08c494a5458d361375bd3d4d7dfe21d4611934 | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_create_default_kms(mock_decrypt):
'\n Should create the secret in the AWS environment\n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('create_secret', {}, {'Name': name, 'Description': description, 'SecretString': 'PlainTextData', 'Tags': []})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description})
entry.create()
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | Should create the secret in the AWS environment | tests/config/providers/secretsmanager/test_entry.py | test_create_default_kms | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_create_default_kms(mock_decrypt):
'\n \n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('create_secret', {}, {'Name': name, 'Description': description, 'SecretString': 'PlainTextData', 'Tags': []})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description})
entry.create()
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_create_default_kms(mock_decrypt):
'\n \n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('create_secret', {}, {'Name': name, 'Description': description, 'SecretString': 'PlainTextData', 'Tags': []})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description})
entry.create()
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN)<|docstring|>Should create the secret in the AWS environment<|endoftext|> |
015f778e909918a0eb510c9ebab9e066bf956884279864577a08980e09a1f268 | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_create_custom_kms(mock_decrypt):
'\n Should create the secret in the AWS environment using a custom Kms key\n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('create_secret', {}, {'Name': name, 'Description': description, 'SecretString': 'PlainTextData', 'KmsKeyId': KEY_ARN, 'Tags': []})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'kms': KEY_ARN})
entry.create()
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | Should create the secret in the AWS environment using a custom Kms key | tests/config/providers/secretsmanager/test_entry.py | test_create_custom_kms | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_create_custom_kms(mock_decrypt):
'\n \n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('create_secret', {}, {'Name': name, 'Description': description, 'SecretString': 'PlainTextData', 'KmsKeyId': KEY_ARN, 'Tags': []})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'kms': KEY_ARN})
entry.create()
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_create_custom_kms(mock_decrypt):
'\n \n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('create_secret', {}, {'Name': name, 'Description': description, 'SecretString': 'PlainTextData', 'KmsKeyId': KEY_ARN, 'Tags': []})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'kms': KEY_ARN})
entry.create()
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN)<|docstring|>Should create the secret in the AWS environment using a custom Kms key<|endoftext|> |
e2368d4cad9afe5af3910337d8e68d0dde38f9aea7ab124fefc3cc07d7fb322e | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_update_default_kms_without_tag_changes(mock_decrypt):
'\n Should update the secret in the AWS environment without tag changes\n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('update_secret', {}, {'SecretId': name, 'Description': description, 'SecretString': 'PlainTextData'})
stubber.add_response('untag_resource', {}, {'SecretId': name, 'TagKeys': []})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description})
entry.update({'ChangesList': []})
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | Should update the secret in the AWS environment without tag changes | tests/config/providers/secretsmanager/test_entry.py | test_update_default_kms_without_tag_changes | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_update_default_kms_without_tag_changes(mock_decrypt):
'\n \n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('update_secret', {}, {'SecretId': name, 'Description': description, 'SecretString': 'PlainTextData'})
stubber.add_response('untag_resource', {}, {'SecretId': name, 'TagKeys': []})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description})
entry.update({'ChangesList': []})
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_update_default_kms_without_tag_changes(mock_decrypt):
'\n \n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('update_secret', {}, {'SecretId': name, 'Description': description, 'SecretString': 'PlainTextData'})
stubber.add_response('untag_resource', {}, {'SecretId': name, 'TagKeys': []})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description})
entry.update({'ChangesList': []})
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN)<|docstring|>Should update the secret in the AWS environment without tag changes<|endoftext|> |
b2497ad2d8d34b70c4461c79bdd646cde211362979db2f5792d4a2906449a5c7 | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_update_custom_kms_without_tag_changes(mock_decrypt):
'\n Should update the secret in the AWS environment without tag changes\n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('update_secret', {}, {'SecretId': name, 'Description': description, 'SecretString': 'PlainTextData', 'KmsKeyId': KEY_ARN})
stubber.add_response('untag_resource', {}, {'SecretId': name, 'TagKeys': []})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'kms': KEY_ARN})
entry.update({'ChangesList': []})
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | Should update the secret in the AWS environment without tag changes | tests/config/providers/secretsmanager/test_entry.py | test_update_custom_kms_without_tag_changes | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_update_custom_kms_without_tag_changes(mock_decrypt):
'\n \n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('update_secret', {}, {'SecretId': name, 'Description': description, 'SecretString': 'PlainTextData', 'KmsKeyId': KEY_ARN})
stubber.add_response('untag_resource', {}, {'SecretId': name, 'TagKeys': []})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'kms': KEY_ARN})
entry.update({'ChangesList': []})
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_update_custom_kms_without_tag_changes(mock_decrypt):
'\n \n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('update_secret', {}, {'SecretId': name, 'Description': description, 'SecretString': 'PlainTextData', 'KmsKeyId': KEY_ARN})
stubber.add_response('untag_resource', {}, {'SecretId': name, 'TagKeys': []})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'kms': KEY_ARN})
entry.update({'ChangesList': []})
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN)<|docstring|>Should update the secret in the AWS environment without tag changes<|endoftext|> |
ba5a977feabba70529e5cd30a84f26f647e90c43d45a6a9b1bb00766718ed601 | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_update_default_kms_with_tag_changes(mock_decrypt):
'\n Should update the secret in the AWS environment with tag changes\n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('update_secret', {}, {'SecretId': name, 'Description': description, 'SecretString': 'PlainTextData'})
stubber.add_response('untag_resource', {}, {'SecretId': name, 'TagKeys': ['Test']})
stubber.add_response('tag_resource', {}, {'SecretId': name, 'Tags': [{'Key': 'Test', 'Value': 'New'}]})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'tags': {'Test': 'New'}})
entry.update({'ChangesList': [{'Key': 'Tags', 'OldValue': [{'Key': 'Test', 'Value': 'Old'}]}]})
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | Should update the secret in the AWS environment with tag changes | tests/config/providers/secretsmanager/test_entry.py | test_update_default_kms_with_tag_changes | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_update_default_kms_with_tag_changes(mock_decrypt):
'\n \n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('update_secret', {}, {'SecretId': name, 'Description': description, 'SecretString': 'PlainTextData'})
stubber.add_response('untag_resource', {}, {'SecretId': name, 'TagKeys': ['Test']})
stubber.add_response('tag_resource', {}, {'SecretId': name, 'Tags': [{'Key': 'Test', 'Value': 'New'}]})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'tags': {'Test': 'New'}})
entry.update({'ChangesList': [{'Key': 'Tags', 'OldValue': [{'Key': 'Test', 'Value': 'Old'}]}]})
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN) | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_update_default_kms_with_tag_changes(mock_decrypt):
'\n \n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('update_secret', {}, {'SecretId': name, 'Description': description, 'SecretString': 'PlainTextData'})
stubber.add_response('untag_resource', {}, {'SecretId': name, 'TagKeys': ['Test']})
stubber.add_response('tag_resource', {}, {'SecretId': name, 'Tags': [{'Key': 'Test', 'Value': 'New'}]})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'tags': {'Test': 'New'}})
entry.update({'ChangesList': [{'Key': 'Tags', 'OldValue': [{'Key': 'Test', 'Value': 'Old'}]}]})
assert True
mock_decrypt.assert_called_once_with(session, 'SecretData', KEY_ARN)<|docstring|>Should update the secret in the AWS environment with tag changes<|endoftext|> |
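The update-with-tags test stubs an `untag_resource` call for every old key followed by a `tag_resource` call with the new tag list. A small sketch of that diff between AWS-style tag lists (function name and exact ordering are assumptions for illustration; the real `SecretManagerEntry.update` logic may differ):

```python
def tag_changes(old_tags, new_tags):
    """Return (keys_to_untag, tags_to_apply) between two AWS-style tag lists."""
    old = {t["Key"]: t["Value"] for t in old_tags}
    new = {t["Key"]: t["Value"] for t in new_tags}
    untag = sorted(old)  # the stubbed flow untags every old key, then re-applies
    apply = [{"Key": k, "Value": v} for k, v in sorted(new.items())]
    return untag, apply

untag, apply = tag_changes(
    [{"Key": "Test", "Value": "Old"}],   # tags currently on the secret
    [{"Key": "Test", "Value": "New"}],   # tags declared locally
)
assert untag == ["Test"]
assert apply == [{"Key": "Test", "Value": "New"}]
```

This mirrors the stubbed sequence above: `untag_resource(TagKeys=['Test'])` followed by `tag_resource(Tags=[{'Key': 'Test', 'Value': 'New'}])`.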
46e2cfc85dbfac5ed849d7f8c47f3dde787d51750050963ca1efa71846dca0e7 | def test_delete():
'\n Should delete the secret in the AWS environment\n '
client = boto3.client('secretsmanager')
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('delete_secret', {}, {'SecretId': name, 'ForceDeleteWithoutRecovery': True})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description})
entry.delete()
assert True | Should delete the secret in the AWS environment | tests/config/providers/secretsmanager/test_entry.py | test_delete | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | def test_delete():
'\n \n '
client = boto3.client('secretsmanager')
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('delete_secret', {}, {'SecretId': name, 'ForceDeleteWithoutRecovery': True})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description})
entry.delete()
assert True | def test_delete():
'\n \n '
client = boto3.client('secretsmanager')
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('delete_secret', {}, {'SecretId': name, 'ForceDeleteWithoutRecovery': True})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description})
entry.delete()
assert True<|docstring|>Should delete the secret in the AWS environment<|endoftext|> |
ae7296a5a95ddf503ac201b8043d64bfc719282bc97011e39c35ea70904e9525 | def test_calculate_changes_with_parameter_not_exists():
'\n Should calculate the changes between AWS resource and local resource\n\n Scenario:\n - secret not exists\n '
client = boto3.client('secretsmanager')
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('list_secrets', {'SecretList': []}, {'Filters': [{'Key': 'name', 'Values': [name]}]})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description})
assert (entry.changes() == {'Exists': False, 'ChangesList': []}) | Should calculate the changes between AWS resource and local resource
Scenario:
- secret not exists | tests/config/providers/secretsmanager/test_entry.py | test_calculate_changes_with_parameter_not_exists | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | def test_calculate_changes_with_parameter_not_exists():
'\n Should calculate the changes between AWS resource and local resource\n\n Scenario:\n - secret not exists\n '
client = boto3.client('secretsmanager')
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('list_secrets', {'SecretList': []}, {'Filters': [{'Key': 'name', 'Values': [name]}]})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description})
assert (entry.changes() == {'Exists': False, 'ChangesList': []}) | def test_calculate_changes_with_parameter_not_exists():
'\n Should calculate the changes between AWS resource and local resource\n\n Scenario:\n - secret not exists\n '
client = boto3.client('secretsmanager')
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('list_secrets', {'SecretList': []}, {'Filters': [{'Key': 'name', 'Values': [name]}]})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description})
assert (entry.changes() == {'Exists': False, 'ChangesList': []})<|docstring|>Should calculate the changes between AWS resource and local resource
Scenario:
- secret not exists<|endoftext|> |
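The assertion above pins down the `changes()` contract for an absent secret. A minimal sketch of that diff logic, an illustrative reimplementation rather than the library's actual code, might look like:

```python
def changes(local, remote):
    """Diff a local secret definition against the remote copy (None = absent)."""
    if remote is None:
        return {"Exists": False, "ChangesList": []}
    diff = []
    for key, value in local.items():
        old = remote.get(key)
        if old != value:
            # Shape matches the ChangesList entries asserted in the tests below.
            diff.append({
                "Key": key,
                "HasChanges": True,
                "Replaceable": True,
                "OldValue": old,
                "Value": value,
            })
    return {"Exists": True, "ChangesList": diff}
```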
25f11c0fc3b46dbc8675e7634f1db21f13e8d59c3a0d07fd17dc22d0d2e3e439 | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_calculate_changes_with_changes(mock_decrypt):
'\n Should calculate the changes between AWS resource and local resource\n\n Scenario:\n - Parameter exists\n - Value changed\n - Description changed\n - Kms changed\n - Tags changed\n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('list_secrets', {'SecretList': [{'ARN': SECRET_ARN, 'Name': name, 'Description': f'{description} CHANGED', 'KmsKeyId': KEY_ARN1, 'Tags': [{'Key': 'Test', 'Value': 'Old'}]}]}, {'Filters': [{'Key': 'name', 'Values': [name]}]})
stubber.add_response('get_secret_value', {'SecretString': 'AWSData'}, {'SecretId': name})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'kms': KEY_ARN, 'tags': {'Test': 'New'}})
assert (entry.changes() == {'Exists': True, 'ChangesList': [{'HasChanges': True, 'Key': 'Value', 'OldValue': 'AWSData', 'Replaceable': True, 'Value': 'PlainTextData'}, {'HasChanges': True, 'Key': 'Description', 'OldValue': 'secret description CHANGED', 'Replaceable': True, 'Value': 'secret description'}, {'HasChanges': True, 'Key': 'KmsKeyId', 'OldValue': KEY_ARN1, 'Replaceable': True, 'Value': KEY_ARN}, {'HasChanges': True, 'Key': 'Tags', 'OldValue': [{'Key': 'Test', 'Value': 'Old'}], 'Replaceable': True, 'Value': [{'Key': 'Test', 'Value': 'New'}]}]}) | Should calculate the changes between AWS resource and local resource
Scenario:
- Parameter exists
- Value changed
- Description changed
- Kms changed
- Tags changed | tests/config/providers/secretsmanager/test_entry.py | test_calculate_changes_with_changes | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_calculate_changes_with_changes(mock_decrypt):
'\n Should calculate the changes between AWS resource and local resource\n\n Scenario:\n - Parameter exists\n - Value changed\n - Description changed\n - Kms changed\n - Tags changed\n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('list_secrets', {'SecretList': [{'ARN': SECRET_ARN, 'Name': name, 'Description': f'{description} CHANGED', 'KmsKeyId': KEY_ARN1, 'Tags': [{'Key': 'Test', 'Value': 'Old'}]}]}, {'Filters': [{'Key': 'name', 'Values': [name]}]})
stubber.add_response('get_secret_value', {'SecretString': 'AWSData'}, {'SecretId': name})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'kms': KEY_ARN, 'tags': {'Test': 'New'}})
assert (entry.changes() == {'Exists': True, 'ChangesList': [{'HasChanges': True, 'Key': 'Value', 'OldValue': 'AWSData', 'Replaceable': True, 'Value': 'PlainTextData'}, {'HasChanges': True, 'Key': 'Description', 'OldValue': 'secret description CHANGED', 'Replaceable': True, 'Value': 'secret description'}, {'HasChanges': True, 'Key': 'KmsKeyId', 'OldValue': KEY_ARN1, 'Replaceable': True, 'Value': KEY_ARN}, {'HasChanges': True, 'Key': 'Tags', 'OldValue': [{'Key': 'Test', 'Value': 'Old'}], 'Replaceable': True, 'Value': [{'Key': 'Test', 'Value': 'New'}]}]}) | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_calculate_changes_with_changes(mock_decrypt):
'\n Should calculate the changes between AWS resource and local resource\n\n Scenario:\n - Parameter exists\n - Value changed\n - Description changed\n - Kms changed\n - Tags changed\n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('list_secrets', {'SecretList': [{'ARN': SECRET_ARN, 'Name': name, 'Description': f'{description} CHANGED', 'KmsKeyId': KEY_ARN1, 'Tags': [{'Key': 'Test', 'Value': 'Old'}]}]}, {'Filters': [{'Key': 'name', 'Values': [name]}]})
stubber.add_response('get_secret_value', {'SecretString': 'AWSData'}, {'SecretId': name})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'kms': KEY_ARN, 'tags': {'Test': 'New'}})
assert (entry.changes() == {'Exists': True, 'ChangesList': [{'HasChanges': True, 'Key': 'Value', 'OldValue': 'AWSData', 'Replaceable': True, 'Value': 'PlainTextData'}, {'HasChanges': True, 'Key': 'Description', 'OldValue': 'secret description CHANGED', 'Replaceable': True, 'Value': 'secret description'}, {'HasChanges': True, 'Key': 'KmsKeyId', 'OldValue': KEY_ARN1, 'Replaceable': True, 'Value': KEY_ARN}, {'HasChanges': True, 'Key': 'Tags', 'OldValue': [{'Key': 'Test', 'Value': 'Old'}], 'Replaceable': True, 'Value': [{'Key': 'Test', 'Value': 'New'}]}]})<|docstring|>Should calculate the changes between AWS resource and local resource
Scenario:
- Parameter exists
- Value changed
- Description changed
- Kms changed
- Tags changed<|endoftext|> |
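Note how the expected Tags entry compares the local mapping form (`{'Test': 'New'}`) against the AWS list-of-`Key`/`Value` form. The conversion implied by the assertion is a one-liner; the helper name here is hypothetical:

```python
def tags_to_aws(tags):
    """{'Test': 'New'} -> [{'Key': 'Test', 'Value': 'New'}]"""
    return [{"Key": key, "Value": value} for key, value in tags.items()]
```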
fd1ece962b3f7816b1855c835fd7a64b44b7534fbb0ca0b3688c22481eb74a40 | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_calculate_changes_without_changes(mock_decrypt):
'\n Should calculate the changes between AWS resource and local resource\n\n Scenario:\n - secret exists\n - Value not changed\n - Description not changed\n - Type not changed\n - Kms not changed\n - Tags not changed\n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('list_secrets', {'SecretList': [{'ARN': SECRET_ARN, 'Name': name, 'Description': description, 'KmsKeyId': KEY_ARN, 'Tags': [{'Key': 'Test', 'Value': 'New'}]}]}, {'Filters': [{'Key': 'name', 'Values': [name]}]})
stubber.add_response('get_secret_value', {'SecretString': 'PlainTextData'}, {'SecretId': name})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'kms': KEY_ARN, 'tags': {'Test': 'New'}})
assert (entry.changes() == {'Exists': True, 'ChangesList': []}) | Should calculate the changes between AWS resource and local resource
Scenario:
- secret exists
- Value not changed
- Description not changed
- Type not changed
- Kms not changed
- Tags not changed | tests/config/providers/secretsmanager/test_entry.py | test_calculate_changes_without_changes | lucasvieirasilva/aws-ssm-secrets-cli | 4 | python | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_calculate_changes_without_changes(mock_decrypt):
'\n Should calculate the changes between AWS resource and local resource\n\n Scenario:\n - secret exists\n - Value not changed\n - Description not changed\n - Type not changed\n - Kms not changed\n - Tags not changed\n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('list_secrets', {'SecretList': [{'ARN': SECRET_ARN, 'Name': name, 'Description': description, 'KmsKeyId': KEY_ARN, 'Tags': [{'Key': 'Test', 'Value': 'New'}]}]}, {'Filters': [{'Key': 'name', 'Values': [name]}]})
stubber.add_response('get_secret_value', {'SecretString': 'PlainTextData'}, {'SecretId': name})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'kms': KEY_ARN, 'tags': {'Test': 'New'}})
assert (entry.changes() == {'Exists': True, 'ChangesList': []}) | @patch('aws_secrets.miscellaneous.kms.decrypt')
def test_calculate_changes_without_changes(mock_decrypt):
'\n Should calculate the changes between AWS resource and local resource\n\n Scenario:\n - secret exists\n - Value not changed\n - Description not changed\n - Type not changed\n - Kms not changed\n - Tags not changed\n '
client = boto3.client('secretsmanager')
mock_decrypt.return_value = b'PlainTextData'
with patch.object(boto3.Session, 'client') as mock_client:
with Stubber(client) as stubber:
name = 'secret1'
description = 'secret description'
mock_client.return_value = client
stubber.add_response('list_secrets', {'SecretList': [{'ARN': SECRET_ARN, 'Name': name, 'Description': description, 'KmsKeyId': KEY_ARN, 'Tags': [{'Key': 'Test', 'Value': 'New'}]}]}, {'Filters': [{'Key': 'name', 'Values': [name]}]})
stubber.add_response('get_secret_value', {'SecretString': 'PlainTextData'}, {'SecretId': name})
session = boto3.Session(region_name='us-east-1')
entry = SecretManagerEntry(session=session, cipher_text='SecretData', kms_arn=KEY_ARN, data={'name': name, 'description': description, 'kms': KEY_ARN, 'tags': {'Test': 'New'}})
assert (entry.changes() == {'Exists': True, 'ChangesList': []})<|docstring|>Should calculate the changes between AWS resource and local resource
Scenario:
- secret exists
- Value not changed
- Description not changed
- Type not changed
- Kms not changed
- Tags not changed<|endoftext|> |
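All three `changes()` tests lean on the botocore `Stubber`'s FIFO behaviour: each queued response must match the next client call, in order. A toy model of just that queue (none of botocore's request serialization or service-model validation is reproduced):

```python
from collections import deque

class MiniStubber:
    """Toy FIFO response queue, illustrating only the Stubber ordering contract."""
    def __init__(self):
        self._queue = deque()

    def add_response(self, method, response, expected_params):
        self._queue.append((method, response, expected_params))

    def call(self, method, **params):
        # Pop the next expectation; reject out-of-order or mismatched calls.
        exp_method, response, expected = self._queue.popleft()
        if method != exp_method or params != expected:
            raise ValueError("unexpected call: %s %r" % (method, params))
        return response
```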
de723a214cea7c1d08c275ec3ee4a13904220a35513dc6a3987da21d73a2d2f8 | def msg_help(self, text):
'\n messages text to the caller, adding an extra oob argument to indicate\n that this is a help command result and could be rendered in a separate\n help window\n '
if type(self).help_more:
usemore = True
if (self.session and (self.session.protocol_key in ('websocket', 'ajax/comet'))):
try:
options = self.account.db._saved_webclient_options
if (options and options['helppopup']):
usemore = False
except KeyError:
pass
if usemore:
evmore.msg(self.caller, text, session=self.session)
return
self.msg(text=(text, {'type': 'help'})) | messages text to the caller, adding an extra oob argument to indicate
that this is a help command result and could be rendered in a separate
help window | evennia/commands/default/help.py | msg_help | dscanlen/evennia | 1,544 | python | def msg_help(self, text):
'\n messages text to the caller, adding an extra oob argument to indicate\n that this is a help command result and could be rendered in a separate\n help window\n '
if type(self).help_more:
usemore = True
if (self.session and (self.session.protocol_key in ('websocket', 'ajax/comet'))):
try:
options = self.account.db._saved_webclient_options
if (options and options['helppopup']):
usemore = False
except KeyError:
pass
if usemore:
evmore.msg(self.caller, text, session=self.session)
return
self.msg(text=(text, {'type': 'help'})) | def msg_help(self, text):
'\n messages text to the caller, adding an extra oob argument to indicate\n that this is a help command result and could be rendered in a separate\n help window\n '
if type(self).help_more:
usemore = True
if (self.session and (self.session.protocol_key in ('websocket', 'ajax/comet'))):
try:
options = self.account.db._saved_webclient_options
if (options and options['helppopup']):
usemore = False
except KeyError:
pass
if usemore:
evmore.msg(self.caller, text, session=self.session)
return
self.msg(text=(text, {'type': 'help'}))<|docstring|>messages text to the caller, adding an extra oob argument to indicate
that this is a help command result and could be rendered in a separate
help window<|endoftext|> |
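The branching in `msg_help` reduces to one decision: page the text through `evmore`, or send it tagged for a client-side popup/plain message. Extracted as a pure function (names are illustrative, not Evennia API):

```python
def use_pager(help_more, protocol_key, options):
    """True -> page the help text; False -> send it for a popup/plain message."""
    if not help_more:
        return False
    # Webclient sessions with the helppopup option render help in a window instead.
    if protocol_key in ("websocket", "ajax/comet") and options.get("helppopup"):
        return False
    return True
```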
c8ab96c4440b60242652e38d530af4be8553282e6040e8d613b6d5cd8c90ed5a | @staticmethod
def format_help_entry(title, help_text, aliases=None, suggested=None):
'\n This visually formats the help entry.\n This method can be overridden to customize the way a help\n entry is displayed.\n\n Args:\n title (str): the title of the help entry.\n help_text (str): the text of the help entry.\n aliases (list of str or None): the list of aliases.\n suggested (list of str or None): suggested reading.\n\n Returns the formatted string, ready to be sent.\n\n '
string = (_SEP + '\n')
if title:
string += ('|CHelp for |w%s|n' % title)
if aliases:
string += (' |C(aliases: %s|C)|n' % '|C,|n '.join((('|w%s|n' % ali) for ali in aliases)))
if help_text:
string += ('\n%s' % dedent(help_text.rstrip()))
if suggested:
string += '\n\n|CSuggested:|n '
string += ('%s' % fill('|C,|n '.join((('|w%s|n' % sug) for sug in suggested))))
string.strip()
string += ('\n' + _SEP)
return string | This visually formats the help entry.
This method can be overridden to customize the way a help
entry is displayed.
Args:
title (str): the title of the help entry.
help_text (str): the text of the help entry.
aliases (list of str or None): the list of aliases.
suggested (list of str or None): suggested reading.
Returns the formatted string, ready to be sent. | evennia/commands/default/help.py | format_help_entry | dscanlen/evennia | 1,544 | python | @staticmethod
def format_help_entry(title, help_text, aliases=None, suggested=None):
'\n This visually formats the help entry.\n This method can be overridden to customize the way a help\n entry is displayed.\n\n Args:\n title (str): the title of the help entry.\n help_text (str): the text of the help entry.\n aliases (list of str or None): the list of aliases.\n suggested (list of str or None): suggested reading.\n\n Returns the formatted string, ready to be sent.\n\n '
string = (_SEP + '\n')
if title:
string += ('|CHelp for |w%s|n' % title)
if aliases:
string += (' |C(aliases: %s|C)|n' % '|C,|n '.join((('|w%s|n' % ali) for ali in aliases)))
if help_text:
string += ('\n%s' % dedent(help_text.rstrip()))
if suggested:
string += '\n\n|CSuggested:|n '
string += ('%s' % fill('|C,|n '.join((('|w%s|n' % sug) for sug in suggested))))
string.strip()
string += ('\n' + _SEP)
return string | @staticmethod
def format_help_entry(title, help_text, aliases=None, suggested=None):
'\n This visually formats the help entry.\n This method can be overridden to customize the way a help\n entry is displayed.\n\n Args:\n title (str): the title of the help entry.\n help_text (str): the text of the help entry.\n aliases (list of str or None): the list of aliases.\n suggested (list of str or None): suggested reading.\n\n Returns the formatted string, ready to be sent.\n\n '
string = (_SEP + '\n')
if title:
string += ('|CHelp for |w%s|n' % title)
if aliases:
string += (' |C(aliases: %s|C)|n' % '|C,|n '.join((('|w%s|n' % ali) for ali in aliases)))
if help_text:
string += ('\n%s' % dedent(help_text.rstrip()))
if suggested:
string += '\n\n|CSuggested:|n '
string += ('%s' % fill('|C,|n '.join((('|w%s|n' % sug) for sug in suggested))))
string.strip()
string += ('\n' + _SEP)
return string<|docstring|>This visually formats the help entry.
This method can be overridden to customize the way a help
entry is displayed.
Args:
title (str): the title of the help entry.
help_text (str): the text of the help entry.
aliases (list of str or None): the list of aliases.
suggested (list of str or None): suggested reading.
Returns the formatted string, ready to be sent.<|endoftext|> |
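Stripped of Evennia's `|C`/`|w` colour codes, the method above is ordinary `textwrap` work. A plain-text sketch (the name `format_plain_help` is ours):

```python
from textwrap import dedent, fill

SEP = "-" * 78

def format_plain_help(title, help_text, aliases=(), suggested=()):
    """Colour-free variant of format_help_entry (illustrative only)."""
    out = SEP + "\n"
    if title:
        out += "Help for %s" % title
    if aliases:
        out += " (aliases: %s)" % ", ".join(aliases)
    if help_text:
        # dedent() removes the common leading indentation of the docstring body.
        out += "\n%s" % dedent(help_text.rstrip())
    if suggested:
        out += "\n\nSuggested: %s" % fill(", ".join(suggested))
    return out + "\n" + SEP
```

Incidentally, the bare `string.strip()` on its own line in the original body is a no-op (`str.strip` returns a new string that is discarded), which this sketch simply omits.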
9154926f6b2797f8d594034f4f198d6017e79dfe5c2606faa4a80bf0b77f6c90 | @staticmethod
def format_help_list(hdict_cmds, hdict_db):
'\n Output a category-ordered list. The inputs are the\n pre-loaded help files for commands and database-helpfiles\n respectively. You can override this method to return a\n custom display of the list of commands and topics.\n '
string = ''
if (hdict_cmds and any(hdict_cmds.values())):
string += ((('\n' + _SEP) + '\n |CCommand help entries|n\n') + _SEP)
for category in sorted(hdict_cmds.keys()):
string += ('\n |w%s|n:\n' % str(category).title())
string += (('|G' + fill('|C, |G'.join(sorted(hdict_cmds[category])))) + '|n')
if (hdict_db and any(hdict_db.values())):
string += ((('\n\n' + _SEP) + '\n\r |COther help entries|n\n') + _SEP)
for category in sorted(hdict_db.keys()):
string += ('\n\r |w%s|n:\n' % str(category).title())
string += (('|G' + fill(', '.join(sorted([str(topic) for topic in hdict_db[category]])))) + '|n')
return string | Output a category-ordered list. The inputs are the
pre-loaded help files for commands and database-helpfiles
respectively. You can override this method to return a
custom display of the list of commands and topics. | evennia/commands/default/help.py | format_help_list | dscanlen/evennia | 1,544 | python | @staticmethod
def format_help_list(hdict_cmds, hdict_db):
'\n Output a category-ordered list. The inputs are the\n pre-loaded help files for commands and database-helpfiles\n respectively. You can override this method to return a\n custom display of the list of commands and topics.\n '
string = ''
if (hdict_cmds and any(hdict_cmds.values())):
string += ((('\n' + _SEP) + '\n |CCommand help entries|n\n') + _SEP)
for category in sorted(hdict_cmds.keys()):
string += ('\n |w%s|n:\n' % str(category).title())
string += (('|G' + fill('|C, |G'.join(sorted(hdict_cmds[category])))) + '|n')
if (hdict_db and any(hdict_db.values())):
string += ((('\n\n' + _SEP) + '\n\r |COther help entries|n\n') + _SEP)
for category in sorted(hdict_db.keys()):
string += ('\n\r |w%s|n:\n' % str(category).title())
string += (('|G' + fill(', '.join(sorted([str(topic) for topic in hdict_db[category]])))) + '|n')
return string | @staticmethod
def format_help_list(hdict_cmds, hdict_db):
'\n Output a category-ordered list. The inputs are the\n pre-loaded help files for commands and database-helpfiles\n respectively. You can override this method to return a\n custom display of the list of commands and topics.\n '
string = ''
if (hdict_cmds and any(hdict_cmds.values())):
string += ((('\n' + _SEP) + '\n |CCommand help entries|n\n') + _SEP)
for category in sorted(hdict_cmds.keys()):
string += ('\n |w%s|n:\n' % str(category).title())
string += (('|G' + fill('|C, |G'.join(sorted(hdict_cmds[category])))) + '|n')
if (hdict_db and any(hdict_db.values())):
string += ((('\n\n' + _SEP) + '\n\r |COther help entries|n\n') + _SEP)
for category in sorted(hdict_db.keys()):
string += ('\n\r |w%s|n:\n' % str(category).title())
string += (('|G' + fill(', '.join(sorted([str(topic) for topic in hdict_db[category]])))) + '|n')
return string<|docstring|>Output a category-ordered list. The inputs are the
pre-loaded help files for commands and database-helpfiles
respectively. You can override this method to return a
custom display of the list of commands and topics.<|endoftext|> |
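The per-category buckets this method consumes (built in the `help` command's `func` entry further down) are a standard `defaultdict` grouping; a self-contained sketch:

```python
from collections import defaultdict

def group_by_category(entries):
    """entries: iterable of (key, help_category) -> {category: sorted keys}."""
    grouped = defaultdict(list)
    for key, category in entries:
        grouped[category].append(key)
    # Sort each bucket so the rendered help list is stable.
    return {category: sorted(keys) for category, keys in grouped.items()}
```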
f66bfada0d8a48be681e41fb8f6e932ef02b6c720dbeccb487daf12dd2105193 | def check_show_help(self, cmd, caller):
"\n Helper method. If this return True, the given cmd\n auto-help will be viewable in the help listing.\n Override this to easily select what is shown to\n the account. Note that only commands available\n in the caller's merged cmdset are available.\n\n Args:\n cmd (Command): Command class from the merged cmdset\n caller (Character, Account or Session): The current caller\n executing the help command.\n\n "
return (cmd.auto_help and cmd.access(caller)) | Helper method. If this returns True, the given cmd
auto-help will be viewable in the help listing.
Override this to easily select what is shown to
the account. Note that only commands available
in the caller's merged cmdset are available.
Args:
cmd (Command): Command class from the merged cmdset
caller (Character, Account or Session): The current caller
executing the help command. | evennia/commands/default/help.py | check_show_help | dscanlen/evennia | 1,544 | python | def check_show_help(self, cmd, caller):
"\n Helper method. If this return True, the given cmd\n auto-help will be viewable in the help listing.\n Override this to easily select what is shown to\n the account. Note that only commands available\n in the caller's merged cmdset are available.\n\n Args:\n cmd (Command): Command class from the merged cmdset\n caller (Character, Account or Session): The current caller\n executing the help command.\n\n "
return (cmd.auto_help and cmd.access(caller)) | def check_show_help(self, cmd, caller):
"\n Helper method. If this return True, the given cmd\n auto-help will be viewable in the help listing.\n Override this to easily select what is shown to\n the account. Note that only commands available\n in the caller's merged cmdset are available.\n\n Args:\n cmd (Command): Command class from the merged cmdset\n caller (Character, Account or Session): The current caller\n executing the help command.\n\n "
return (cmd.auto_help and cmd.access(caller))<|docstring|>Helper method. If this returns True, the given cmd
auto-help will be viewable in the help listing.
Override this to easily select what is shown to
the account. Note that only commands available
in the caller's merged cmdset are available.
Args:
cmd (Command): Command class from the merged cmdset
caller (Character, Account or Session): The current caller
executing the help command.<|endoftext|> |
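Used as a filter, the predicate above selects which commands enter the help index. A minimal harness (`FakeCmd` is a test double, not an Evennia class):

```python
class FakeCmd:
    """Test double exposing just the attributes the predicate reads."""
    def __init__(self, key, auto_help=True, can_access=True):
        self.key = key
        self.auto_help = auto_help
        self._can_access = can_access

    def access(self, caller, access_type="cmd", default=False):
        return self._can_access

def help_visible(cmds, caller=None):
    # Same predicate as check_show_help: auto_help enabled AND access granted.
    return [cmd.key for cmd in cmds if cmd.auto_help and cmd.access(caller)]
```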
46a3bb7d718683aacd992c761f46652a8b87c74fed947660e31865333c089f86 | def should_list_cmd(self, cmd, caller):
'\n Should the specified command appear in the help table?\n\n This method only checks whether a specified command should\n appear in the table of topics/commands. The command can be\n used by the caller (see the \'check_show_help\' method) and\n the command will still be available, for instance, if a\n character types \'help name of the command\'. However, if\n you return False, the specified command will not appear in\n the table. This is sometimes useful to "hide" commands in\n the table, but still access them through the help system.\n\n Args:\n cmd: the command to be tested.\n caller: the caller of the help system.\n\n Return:\n True: the command should appear in the table.\n False: the command shouldn\'t appear in the table.\n\n '
return cmd.access(caller, 'view', default=True) | Should the specified command appear in the help table?
This method only checks whether a specified command should
appear in the table of topics/commands. The command can be
used by the caller (see the 'check_show_help' method) and
the command will still be available, for instance, if a
character types 'help name of the command'. However, if
you return False, the specified command will not appear in
the table. This is sometimes useful to "hide" commands in
the table, but still access them through the help system.
Args:
cmd: the command to be tested.
caller: the caller of the help system.
Return:
True: the command should appear in the table.
False: the command shouldn't appear in the table. | evennia/commands/default/help.py | should_list_cmd | dscanlen/evennia | 1,544 | python | def should_list_cmd(self, cmd, caller):
'\n Should the specified command appear in the help table?\n\n This method only checks whether a specified command should\n appear in the table of topics/commands. The command can be\n used by the caller (see the \'check_show_help\' method) and\n the command will still be available, for instance, if a\n character types \'help name of the command\'. However, if\n you return False, the specified command will not appear in\n the table. This is sometimes useful to "hide" commands in\n the table, but still access them through the help system.\n\n Args:\n cmd: the command to be tested.\n caller: the caller of the help system.\n\n Return:\n True: the command should appear in the table.\n False: the command shouldn\'t appear in the table.\n\n '
return cmd.access(caller, 'view', default=True) | def should_list_cmd(self, cmd, caller):
'\n Should the specified command appear in the help table?\n\n This method only checks whether a specified command should\n appear in the table of topics/commands. The command can be\n used by the caller (see the \'check_show_help\' method) and\n the command will still be available, for instance, if a\n character types \'help name of the command\'. However, if\n you return False, the specified command will not appear in\n the table. This is sometimes useful to "hide" commands in\n the table, but still access them through the help system.\n\n Args:\n cmd: the command to be tested.\n caller: the caller of the help system.\n\n Return:\n True: the command should appear in the table.\n False: the command shouldn\'t appear in the table.\n\n '
return cmd.access(caller, 'view', default=True)<|docstring|>Should the specified command appear in the help table?
This method only checks whether a specified command should
appear in the table of topics/commands. The command can be
used by the caller (see the 'check_show_help' method) and
the command will still be available, for instance, if a
character types 'help name of the command'. However, if
you return False, the specified command will not appear in
the table. This is sometimes useful to "hide" commands in
the table, but still access them through the help system.
Args:
cmd: the command to be tested.
caller: the caller of the help system.
Return:
True: the command should appear in the table.
False: the command shouldn't appear in the table.<|endoftext|> |
3cace744b8e079926e1907b6363edad6125888e93e0c8e330437f8c6c7d7e800 | def parse(self):
'\n input is a string containing the command or topic to match.\n '
self.original_args = self.args.strip()
self.args = self.args.strip().lower() | input is a string containing the command or topic to match. | evennia/commands/default/help.py | parse | dscanlen/evennia | 1,544 | python | def parse(self):
'\n \n '
self.original_args = self.args.strip()
self.args = self.args.strip().lower() | def parse(self):
'\n \n '
self.original_args = self.args.strip()
self.args = self.args.strip().lower()<|docstring|>input is a string containing the command or topic to match.<|endoftext|> |
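The `func` entry that follows feeds the parsed query into `string_suggestions` for did-you-mean hints. A stdlib approximation built on `difflib` (assumption: Evennia's helper is likewise similarity-ratio based; the name `suggest` is ours):

```python
from difflib import get_close_matches

def suggest(query, vocabulary, cutoff=0.6, maxnum=3):
    """Return up to maxnum vocabulary entries similar to query."""
    # get_close_matches ranks candidates by SequenceMatcher ratio >= cutoff.
    return get_close_matches(query, vocabulary, n=maxnum, cutoff=cutoff)
```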
29ab6845c17589c9e29ca45719effe2698a7367e09c0999637a812d24eadcfa6 | def func(self):
'\n Run the dynamic help entry creator.\n '
(query, cmdset) = (self.args, self.cmdset)
caller = self.caller
suggestion_cutoff = self.suggestion_cutoff
suggestion_maxnum = self.suggestion_maxnum
if (not query):
query = 'all'
cmdset.make_unique(caller)
all_cmds = [cmd for cmd in cmdset if self.check_show_help(cmd, caller)]
all_topics = [topic for topic in HelpEntry.objects.all() if topic.access(caller, 'view', default=True)]
all_categories = list(set(([cmd.help_category.lower() for cmd in all_cmds] + [topic.help_category.lower() for topic in all_topics])))
if (query in ('list', 'all')):
hdict_cmd = defaultdict(list)
hdict_topic = defaultdict(list)
for cmd in all_cmds:
if self.should_list_cmd(cmd, caller):
key = (cmd.auto_help_display_key if hasattr(cmd, 'auto_help_display_key') else cmd.key)
hdict_cmd[cmd.help_category].append(key)
[hdict_topic[topic.help_category].append(topic.key) for topic in all_topics]
self.msg_help(self.format_help_list(hdict_cmd, hdict_topic))
return
suggestions = None
if (suggestion_maxnum > 0):
vocabulary = (([cmd.key for cmd in all_cmds if cmd] + [topic.key for topic in all_topics]) + all_categories)
[vocabulary.extend(cmd.aliases) for cmd in all_cmds]
suggestions = [sugg for sugg in string_suggestions(query, set(vocabulary), cutoff=suggestion_cutoff, maxnum=suggestion_maxnum) if (sugg != query)]
if (not suggestions):
suggestions = [sugg for sugg in vocabulary if ((sugg != query) and sugg.startswith(query))]
match = [cmd for cmd in all_cmds if (cmd == query)]
if (not match):
_query = (query[1:] if (query[0] in CMD_IGNORE_PREFIXES) else query)
match = [cmd for cmd in all_cmds for m in cmd._matchset if ((m == _query) or ((m[0] in CMD_IGNORE_PREFIXES) and (m[1:] == _query)))]
if (len(match) == 1):
cmd = match[0]
key = (cmd.auto_help_display_key if hasattr(cmd, 'auto_help_display_key') else cmd.key)
formatted = self.format_help_entry(key, cmd.get_help(caller, cmdset), aliases=cmd.aliases, suggested=suggestions)
self.msg_help(formatted)
return
match = list(HelpEntry.objects.find_topicmatch(query, exact=True))
if (len(match) == 1):
formatted = self.format_help_entry(match[0].key, match[0].entrytext, aliases=match[0].aliases.all(), suggested=suggestions)
self.msg_help(formatted)
return
if (query in all_categories):
self.msg_help(self.format_help_list({query: [(cmd.auto_help_display_key if hasattr(cmd, 'auto_help_display_key') else cmd.key) for cmd in all_cmds if (cmd.help_category == query)]}, {query: [topic.key for topic in all_topics if (topic.help_category == query)]}))
return
self.msg(self.format_help_entry('', f"No help entry found for '{query}'", None, suggested=suggestions), options={'type': 'help'}) | Run the dynamic help entry creator. | evennia/commands/default/help.py | func | dscanlen/evennia | 1,544 | python | def func(self):
'\n \n '
(query, cmdset) = (self.args, self.cmdset)
caller = self.caller
suggestion_cutoff = self.suggestion_cutoff
suggestion_maxnum = self.suggestion_maxnum
if (not query):
query = 'all'
cmdset.make_unique(caller)
all_cmds = [cmd for cmd in cmdset if self.check_show_help(cmd, caller)]
all_topics = [topic for topic in HelpEntry.objects.all() if topic.access(caller, 'view', default=True)]
all_categories = list(set(([cmd.help_category.lower() for cmd in all_cmds] + [topic.help_category.lower() for topic in all_topics])))
if (query in ('list', 'all')):
hdict_cmd = defaultdict(list)
hdict_topic = defaultdict(list)
for cmd in all_cmds:
if self.should_list_cmd(cmd, caller):
key = (cmd.auto_help_display_key if hasattr(cmd, 'auto_help_display_key') else cmd.key)
hdict_cmd[cmd.help_category].append(key)
[hdict_topic[topic.help_category].append(topic.key) for topic in all_topics]
self.msg_help(self.format_help_list(hdict_cmd, hdict_topic))
return
suggestions = None
if (suggestion_maxnum > 0):
vocabulary = (([cmd.key for cmd in all_cmds if cmd] + [topic.key for topic in all_topics]) + all_categories)
[vocabulary.extend(cmd.aliases) for cmd in all_cmds]
suggestions = [sugg for sugg in string_suggestions(query, set(vocabulary), cutoff=suggestion_cutoff, maxnum=suggestion_maxnum) if (sugg != query)]
if (not suggestions):
suggestions = [sugg for sugg in vocabulary if ((sugg != query) and sugg.startswith(query))]
match = [cmd for cmd in all_cmds if (cmd == query)]
if (not match):
_query = (query[1:] if (query[0] in CMD_IGNORE_PREFIXES) else query)
match = [cmd for cmd in all_cmds for m in cmd._matchset if ((m == _query) or ((m[0] in CMD_IGNORE_PREFIXES) and (m[1:] == _query)))]
if (len(match) == 1):
cmd = match[0]
key = (cmd.auto_help_display_key if hasattr(cmd, 'auto_help_display_key') else cmd.key)
formatted = self.format_help_entry(key, cmd.get_help(caller, cmdset), aliases=cmd.aliases, suggested=suggestions)
self.msg_help(formatted)
return
match = list(HelpEntry.objects.find_topicmatch(query, exact=True))
if (len(match) == 1):
formatted = self.format_help_entry(match[0].key, match[0].entrytext, aliases=match[0].aliases.all(), suggested=suggestions)
self.msg_help(formatted)
return
if (query in all_categories):
self.msg_help(self.format_help_list({query: [(cmd.auto_help_display_key if hasattr(cmd, 'auto_help_display_key') else cmd.key) for cmd in all_cmds if (cmd.help_category == query)]}, {query: [topic.key for topic in all_topics if (topic.help_category == query)]}))
return
self.msg(self.format_help_entry('', f"No help entry found for '{query}'", None, suggested=suggestions), options={'type': 'help'})
'\n \n '
(query, cmdset) = (self.args, self.cmdset)
caller = self.caller
suggestion_cutoff = self.suggestion_cutoff
suggestion_maxnum = self.suggestion_maxnum
if (not query):
query = 'all'
cmdset.make_unique(caller)
all_cmds = [cmd for cmd in cmdset if self.check_show_help(cmd, caller)]
all_topics = [topic for topic in HelpEntry.objects.all() if topic.access(caller, 'view', default=True)]
all_categories = list(set(([cmd.help_category.lower() for cmd in all_cmds] + [topic.help_category.lower() for topic in all_topics])))
if (query in ('list', 'all')):
hdict_cmd = defaultdict(list)
hdict_topic = defaultdict(list)
for cmd in all_cmds:
if self.should_list_cmd(cmd, caller):
key = (cmd.auto_help_display_key if hasattr(cmd, 'auto_help_display_key') else cmd.key)
hdict_cmd[cmd.help_category].append(key)
[hdict_topic[topic.help_category].append(topic.key) for topic in all_topics]
self.msg_help(self.format_help_list(hdict_cmd, hdict_topic))
return
suggestions = None
if (suggestion_maxnum > 0):
vocabulary = (([cmd.key for cmd in all_cmds if cmd] + [topic.key for topic in all_topics]) + all_categories)
[vocabulary.extend(cmd.aliases) for cmd in all_cmds]
suggestions = [sugg for sugg in string_suggestions(query, set(vocabulary), cutoff=suggestion_cutoff, maxnum=suggestion_maxnum) if (sugg != query)]
if (not suggestions):
suggestions = [sugg for sugg in vocabulary if ((sugg != query) and sugg.startswith(query))]
match = [cmd for cmd in all_cmds if (cmd == query)]
if (not match):
_query = (query[1:] if (query[0] in CMD_IGNORE_PREFIXES) else query)
match = [cmd for cmd in all_cmds for m in cmd._matchset if ((m == _query) or ((m[0] in CMD_IGNORE_PREFIXES) and (m[1:] == _query)))]
if (len(match) == 1):
cmd = match[0]
key = (cmd.auto_help_display_key if hasattr(cmd, 'auto_help_display_key') else cmd.key)
formatted = self.format_help_entry(key, cmd.get_help(caller, cmdset), aliases=cmd.aliases, suggested=suggestions)
self.msg_help(formatted)
return
match = list(HelpEntry.objects.find_topicmatch(query, exact=True))
if (len(match) == 1):
formatted = self.format_help_entry(match[0].key, match[0].entrytext, aliases=match[0].aliases.all(), suggested=suggestions)
self.msg_help(formatted)
return
if (query in all_categories):
self.msg_help(self.format_help_list({query: [(cmd.auto_help_display_key if hasattr(cmd, 'auto_help_display_key') else cmd.key) for cmd in all_cmds if (cmd.help_category == query)]}, {query: [topic.key for topic in all_topics if (topic.help_category == query)]}))
return
self.msg(self.format_help_entry('', f"No help entry found for '{query}'", None, suggested=suggestions), options={'type': 'help'})<|docstring|>Run the dynamic help entry creator.<|endoftext|>
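The help command in the row above falls back on Evennia's `string_suggestions` helper for fuzzy matches, then on plain prefix matches. A rough stdlib sketch of that same two-step fallback (the helper name `suggest` is mine, not Evennia's; built on `difflib.get_close_matches`, which is a close but not identical matcher) could look like:

```python
import difflib

def suggest(query, vocabulary, cutoff=0.6, maxnum=3):
    """Return near-matches to query from vocabulary, excluding exact hits.

    Step 1: fuzzy matches above a similarity cutoff.
    Step 2: if nothing fuzzy is found, fall back to prefix matches,
    mirroring the fallback chain in the help command above.
    """
    matches = [s for s in difflib.get_close_matches(query, set(vocabulary),
                                                    n=maxnum, cutoff=cutoff)
               if s != query]
    if not matches:
        matches = [s for s in vocabulary if s != query and s.startswith(query)]
    return matches
```

For example, `suggest("hep", ["help", "look", "get"])` yields the fuzzy hit, while a one-letter query like `"g"` only survives via the prefix fallback.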
c4fb9d6521037a755984106232593f6ae2396490d783920a7455b76fb232fdcb | def func(self):
'Implement the function'
switches = self.switches
lhslist = self.lhslist
if (not self.args):
self.msg('Usage: sethelp[/switches] <topic>[;alias;alias][,category[,locks,..] = <text>')
return
nlist = len(lhslist)
topicstr = (lhslist[0] if (nlist > 0) else '')
if (not topicstr):
self.msg('You have to define a topic!')
return
topicstrlist = topicstr.split(';')
(topicstr, aliases) = (topicstrlist[0], (topicstrlist[1:] if (len(topicstrlist) > 1) else []))
aliastxt = (('(aliases: %s)' % ', '.join(aliases)) if aliases else '')
old_entry = None
try:
for querystr in topicstrlist:
old_entry = HelpEntry.objects.find_topicmatch(querystr)
if old_entry:
old_entry = list(old_entry)[0]
break
category = (lhslist[1] if (nlist > 1) else old_entry.help_category)
lockstring = (','.join(lhslist[2:]) if (nlist > 2) else old_entry.locks.get())
except Exception:
old_entry = None
category = (lhslist[1] if (nlist > 1) else 'General')
lockstring = (','.join(lhslist[2:]) if (nlist > 2) else 'view:all()')
category = category.lower()
if ('edit' in switches):
if old_entry:
topicstr = old_entry.key
if self.rhs:
old_entry.entrytext += ('\n%s' % self.rhs)
helpentry = old_entry
else:
helpentry = create.create_help_entry(topicstr, self.rhs, category=category, locks=lockstring, aliases=aliases)
self.caller.db._editing_help = helpentry
EvEditor(self.caller, loadfunc=_loadhelp, savefunc=_savehelp, quitfunc=_quithelp, key='topic {}'.format(topicstr), persistent=True)
return
if (('append' in switches) or ('merge' in switches) or ('extend' in switches)):
if (not old_entry):
self.msg(("Could not find topic '%s'. You must give an exact name." % topicstr))
return
if (not self.rhs):
self.msg('You must supply text to append/merge.')
return
if ('merge' in switches):
old_entry.entrytext += (' ' + self.rhs)
else:
old_entry.entrytext += ('\n%s' % self.rhs)
old_entry.aliases.add(aliases)
self.msg(('Entry updated:\n%s%s' % (old_entry.entrytext, aliastxt)))
return
if (('delete' in switches) or ('del' in switches)):
if (not old_entry):
self.msg(("Could not find topic '%s'%s." % (topicstr, aliastxt)))
return
old_entry.delete()
self.msg(("Deleted help entry '%s'%s." % (topicstr, aliastxt)))
return
if (not self.rhs):
self.msg('You must supply a help text to add.')
return
if old_entry:
if ('replace' in switches):
old_entry.key = topicstr
old_entry.entrytext = self.rhs
old_entry.help_category = category
old_entry.locks.clear()
old_entry.locks.add(lockstring)
old_entry.aliases.add(aliases)
old_entry.save()
self.msg(("Overwrote the old topic '%s'%s." % (topicstr, aliastxt)))
else:
self.msg(("Topic '%s'%s already exists. Use /replace to overwrite or /append or /merge to add text to it." % (topicstr, aliastxt)))
else:
new_entry = create.create_help_entry(topicstr, self.rhs, category=category, locks=lockstring, aliases=aliases)
if new_entry:
self.msg(("Topic '%s'%s was successfully created." % (topicstr, aliastxt)))
if ('edit' in switches):
self.caller.db._editing_help = new_entry
EvEditor(self.caller, loadfunc=_loadhelp, savefunc=_savehelp, quitfunc=_quithelp, key='topic {}'.format(new_entry.key), persistent=True)
return
else:
self.msg(("Error when creating topic '%s'%s! Contact an admin." % (topicstr, aliastxt))) | Implement the function | evennia/commands/default/help.py | func | dscanlen/evennia | 1,544 | python | def func(self):
switches = self.switches
lhslist = self.lhslist
if (not self.args):
self.msg('Usage: sethelp[/switches] <topic>[;alias;alias][,category[,locks,..] = <text>')
return
nlist = len(lhslist)
topicstr = (lhslist[0] if (nlist > 0) else '')
if (not topicstr):
self.msg('You have to define a topic!')
return
topicstrlist = topicstr.split(';')
(topicstr, aliases) = (topicstrlist[0], (topicstrlist[1:] if (len(topicstrlist) > 1) else []))
aliastxt = (('(aliases: %s)' % ', '.join(aliases)) if aliases else '')
old_entry = None
try:
for querystr in topicstrlist:
old_entry = HelpEntry.objects.find_topicmatch(querystr)
if old_entry:
old_entry = list(old_entry)[0]
break
category = (lhslist[1] if (nlist > 1) else old_entry.help_category)
lockstring = (','.join(lhslist[2:]) if (nlist > 2) else old_entry.locks.get())
except Exception:
old_entry = None
category = (lhslist[1] if (nlist > 1) else 'General')
lockstring = (','.join(lhslist[2:]) if (nlist > 2) else 'view:all()')
category = category.lower()
if ('edit' in switches):
if old_entry:
topicstr = old_entry.key
if self.rhs:
old_entry.entrytext += ('\n%s' % self.rhs)
helpentry = old_entry
else:
helpentry = create.create_help_entry(topicstr, self.rhs, category=category, locks=lockstring, aliases=aliases)
self.caller.db._editing_help = helpentry
EvEditor(self.caller, loadfunc=_loadhelp, savefunc=_savehelp, quitfunc=_quithelp, key='topic {}'.format(topicstr), persistent=True)
return
if (('append' in switches) or ('merge' in switches) or ('extend' in switches)):
if (not old_entry):
self.msg(("Could not find topic '%s'. You must give an exact name." % topicstr))
return
if (not self.rhs):
self.msg('You must supply text to append/merge.')
return
if ('merge' in switches):
old_entry.entrytext += (' ' + self.rhs)
else:
old_entry.entrytext += ('\n%s' % self.rhs)
old_entry.aliases.add(aliases)
self.msg(('Entry updated:\n%s%s' % (old_entry.entrytext, aliastxt)))
return
if (('delete' in switches) or ('del' in switches)):
if (not old_entry):
self.msg(("Could not find topic '%s'%s." % (topicstr, aliastxt)))
return
old_entry.delete()
self.msg(("Deleted help entry '%s'%s." % (topicstr, aliastxt)))
return
if (not self.rhs):
self.msg('You must supply a help text to add.')
return
if old_entry:
if ('replace' in switches):
old_entry.key = topicstr
old_entry.entrytext = self.rhs
old_entry.help_category = category
old_entry.locks.clear()
old_entry.locks.add(lockstring)
old_entry.aliases.add(aliases)
old_entry.save()
self.msg(("Overwrote the old topic '%s'%s." % (topicstr, aliastxt)))
else:
self.msg(("Topic '%s'%s already exists. Use /replace to overwrite or /append or /merge to add text to it." % (topicstr, aliastxt)))
else:
new_entry = create.create_help_entry(topicstr, self.rhs, category=category, locks=lockstring, aliases=aliases)
if new_entry:
self.msg(("Topic '%s'%s was successfully created." % (topicstr, aliastxt)))
if ('edit' in switches):
self.caller.db._editing_help = new_entry
EvEditor(self.caller, loadfunc=_loadhelp, savefunc=_savehelp, quitfunc=_quithelp, key='topic {}'.format(new_entry.key), persistent=True)
return
else:
self.msg(("Error when creating topic '%s'%s! Contact an admin." % (topicstr, aliastxt))) | def func(self):
switches = self.switches
lhslist = self.lhslist
if (not self.args):
self.msg('Usage: sethelp[/switches] <topic>[;alias;alias][,category[,locks,..] = <text>')
return
nlist = len(lhslist)
topicstr = (lhslist[0] if (nlist > 0) else '')
if (not topicstr):
self.msg('You have to define a topic!')
return
topicstrlist = topicstr.split(';')
(topicstr, aliases) = (topicstrlist[0], (topicstrlist[1:] if (len(topicstrlist) > 1) else []))
aliastxt = (('(aliases: %s)' % ', '.join(aliases)) if aliases else '')
old_entry = None
try:
for querystr in topicstrlist:
old_entry = HelpEntry.objects.find_topicmatch(querystr)
if old_entry:
old_entry = list(old_entry)[0]
break
category = (lhslist[1] if (nlist > 1) else old_entry.help_category)
lockstring = (','.join(lhslist[2:]) if (nlist > 2) else old_entry.locks.get())
except Exception:
old_entry = None
category = (lhslist[1] if (nlist > 1) else 'General')
lockstring = (','.join(lhslist[2:]) if (nlist > 2) else 'view:all()')
category = category.lower()
if ('edit' in switches):
if old_entry:
topicstr = old_entry.key
if self.rhs:
old_entry.entrytext += ('\n%s' % self.rhs)
helpentry = old_entry
else:
helpentry = create.create_help_entry(topicstr, self.rhs, category=category, locks=lockstring, aliases=aliases)
self.caller.db._editing_help = helpentry
EvEditor(self.caller, loadfunc=_loadhelp, savefunc=_savehelp, quitfunc=_quithelp, key='topic {}'.format(topicstr), persistent=True)
return
if (('append' in switches) or ('merge' in switches) or ('extend' in switches)):
if (not old_entry):
self.msg(("Could not find topic '%s'. You must give an exact name." % topicstr))
return
if (not self.rhs):
self.msg('You must supply text to append/merge.')
return
if ('merge' in switches):
old_entry.entrytext += (' ' + self.rhs)
else:
old_entry.entrytext += ('\n%s' % self.rhs)
old_entry.aliases.add(aliases)
self.msg(('Entry updated:\n%s%s' % (old_entry.entrytext, aliastxt)))
return
if (('delete' in switches) or ('del' in switches)):
if (not old_entry):
self.msg(("Could not find topic '%s'%s." % (topicstr, aliastxt)))
return
old_entry.delete()
self.msg(("Deleted help entry '%s'%s." % (topicstr, aliastxt)))
return
if (not self.rhs):
self.msg('You must supply a help text to add.')
return
if old_entry:
if ('replace' in switches):
old_entry.key = topicstr
old_entry.entrytext = self.rhs
old_entry.help_category = category
old_entry.locks.clear()
old_entry.locks.add(lockstring)
old_entry.aliases.add(aliases)
old_entry.save()
self.msg(("Overwrote the old topic '%s'%s." % (topicstr, aliastxt)))
else:
self.msg(("Topic '%s'%s already exists. Use /replace to overwrite or /append or /merge to add text to it." % (topicstr, aliastxt)))
else:
new_entry = create.create_help_entry(topicstr, self.rhs, category=category, locks=lockstring, aliases=aliases)
if new_entry:
self.msg(("Topic '%s'%s was successfully created." % (topicstr, aliastxt)))
if ('edit' in switches):
self.caller.db._editing_help = new_entry
EvEditor(self.caller, loadfunc=_loadhelp, savefunc=_savehelp, quitfunc=_quithelp, key='topic {}'.format(new_entry.key), persistent=True)
return
else:
self.msg(("Error when creating topic '%s'%s! Contact an admin." % (topicstr, aliastxt)))<|docstring|>Implement the function<|endoftext|> |
258b8cae0379da947c2d0f5112b771a86dd87754c1343a922dbe5c83d291fc94 | def compress_image(img, w=128, h=128):
"''\n 缩略图\n "
img.thumbnail((w, h))
img.save((abspath + 'test1.png'), 'PNG')
print(u'成功保存为png格式, 压缩为128*128格式图片') | ''
缩略图 | util/WaterMarkUtils.py | compress_image | wpwbb510582246/buaa-daka | 17 | python | def compress_image(img, w=128, h=128):
"\n 缩略图\n "
img.thumbnail((w, h))
im.save((abspath + 'test1.png'), 'PNG')
print(u'成功保存为png格式, 压缩为128*128格式图片') | def compress_image(img, w=128, h=128):
"\n 缩略图\n "
img.thumbnail((w, h))
img.save((abspath + 'test1.png'), 'PNG')
print(u'成功保存为png格式, 压缩为128*128格式图片')<|docstring|>''
缩略图<|endoftext|> |
e98ed359a66cbe6ca91346cefc26c833250394af854132f3f273b374efca9ef2 | def cut_image(img):
"''\n 截图, 旋转,再粘贴\n "
(width, height) = img.size
box = ((width - 200), (height - 100), width, height)
region = img.crop(box)
region = region.transpose(Image.ROTATE_180)
img.paste(region, box)
img.save((abspath + '/test2.jpg'), 'JPEG')
print(u'重新拼图成功') | ''
截图, 旋转,再粘贴 | util/WaterMarkUtils.py | cut_image | wpwbb510582246/buaa-daka | 17 | python | def cut_image(img):
"\n 截图, 旋转,再粘贴\n "
(width, height) = img.size
box = ((width - 200), (height - 100), width, height)
region = img.crop(box)
region = region.transpose(Image.ROTATE_180)
img.paste(region, box)
img.save((abspath + '/test2.jpg'), 'JPEG')
print(u'重新拼图成功') | def cut_image(img):
"\n 截图, 旋转,再粘贴\n "
(width, height) = img.size
box = ((width - 200), (height - 100), width, height)
region = img.crop(box)
region = region.transpose(Image.ROTATE_180)
img.paste(region, box)
img.save((abspath + '/test2.jpg'), 'JPEG')
print(u'重新拼图成功')<|docstring|>''
截图, 旋转,再粘贴<|endoftext|> |
312fe1f66f591adb33ec18283a915886c37e47781e44dfa74c2dcdd39982ef43 | def logo_watermark(img, logo_path):
"''\n 添加一个图片水印,原理就是合并图层,用png比较好\n "
baseim = img
logoim = Image.open(logo_path)
(bw, bh) = baseim.size
(lw, lh) = logoim.size
baseim.paste(logoim, ((bw - lw), (bh - lh)))
baseim.save((abspath + '/test3.jpg'), 'JPEG')
print(u'logo水印组合成功') | ''
添加一个图片水印,原理就是合并图层,用png比较好 | util/WaterMarkUtils.py | logo_watermark | wpwbb510582246/buaa-daka | 17 | python | def logo_watermark(img, logo_path):
"\n 添加一个图片水印,原理就是合并图层,用png比较好\n "
baseim = img
logoim = Image.open(logo_path)
(bw, bh) = baseim.size
(lw, lh) = logoim.size
baseim.paste(logoim, ((bw - lw), (bh - lh)))
baseim.save((abspath + '/test3.jpg'), 'JPEG')
print(u'logo水印组合成功') | def logo_watermark(img, logo_path):
"\n 添加一个图片水印,原理就是合并图层,用png比较好\n "
baseim = img
logoim = Image.open(logo_path)
(bw, bh) = baseim.size
(lw, lh) = logoim.size
baseim.paste(logoim, ((bw - lw), (bh - lh)))
baseim.save((abspath + '/test3.jpg'), 'JPEG')
print(u'logo水印组合成功')<|docstring|>''
添加一个图片水印,原理就是合并图层,用png比较好<|endoftext|> |
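`logo_watermark` above pastes the logo at `(bw - lw, bh - lh)`, i.e. flush with the base image's bottom-right corner. That offset arithmetic can be isolated in a tiny helper (the name and the optional `margin` inset are my additions, not part of the original utility):

```python
def bottom_right_offset(base_size, logo_size, margin=0):
    """Top-left (x, y) at which to paste a logo so it sits in the base
    image's bottom-right corner, optionally inset by `margin` pixels.
    With margin=0 this is exactly the math used in logo_watermark."""
    bw, bh = base_size
    lw, lh = logo_size
    return (bw - lw - margin, bh - lh - margin)
```

In PIL terms the result would be passed as the second argument to `baseim.paste(logoim, offset)`.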
03c94aac2943b27070b5b36f330719a0ab910ae1a3bbc96181bd5c47215a11e2 | def text_watermark(source_file, out_base_path='.', text='@Grayson', angle=23, opacity=0.5):
'\'\'\n 添加一个文字水印,做成透明水印的模样,应该是png图层合并\n http://www.pythoncentral.io/watermark-images-python-2x/\n 这里会产生著名的 ImportError("The _imagingft C module is not installed") 错误\n Pillow通过安装来解决 pip install Pillow\n '
img = Image.open(source_file)
watermark = Image.new('RGBA', img.size)
FONT = '../../fonts/NewYork.ttf'
size = 1
n_font = ImageFont.truetype(FONT, size)
(n_width, n_height) = n_font.getsize(text)
text_box = (min(watermark.size[0], watermark.size[1]) / 4)
print(text_box)
while ((n_width + n_height) < text_box):
size += 2
n_font = ImageFont.truetype(FONT, size=size)
(n_width, n_height) = n_font.getsize(text)
text_width = (watermark.size[0] - n_width)
text_height = (watermark.size[1] - n_height)
print(text_width, text_height)
draw = ImageDraw.Draw(watermark, 'RGBA')
delta_right = ((- text_width) / 9)
delta_bottom = ((- text_height) / 24)
draw.text(((delta_right + text_width), (delta_bottom + text_height)), text, font=n_font, fill='#ffffff')
alpha = watermark.split()[3]
alpha = ImageEnhance.Brightness(alpha).enhance(opacity)
watermark.putalpha(alpha)
str_uuid = ''.join(str(uuid.uuid4()).split('-'))
splits = source_file.split('.')
image_suffix = splits[(len(splits) - 1)]
out_file = ((((out_base_path + '/') + str_uuid) + '.') + image_suffix)
print(out_file)
Image.composite(watermark, img, watermark).save(out_file, image_suffix)
print(u'文字水印成功')
return str_uuid | ''
添加一个文字水印,做成透明水印的模样,应该是png图层合并
http://www.pythoncentral.io/watermark-images-python-2x/
这里会产生著名的 ImportError("The _imagingft C module is not installed") 错误
Pillow通过安装来解决 pip install Pillow | util/WaterMarkUtils.py | text_watermark | wpwbb510582246/buaa-daka | 17 | python | def text_watermark(source_file, out_base_path='.', text='@Grayson', angle=23, opacity=0.5):
'\'\'\n 添加一个文字水印,做成透明水印的模样,应该是png图层合并\n http://www.pythoncentral.io/watermark-images-python-2x/\n 这里会产生著名的 ImportError("The _imagingft C module is not installed") 错误\n Pillow通过安装来解决 pip install Pillow\n '
img = Image.open(source_file)
watermark = Image.new('RGBA', img.size)
FONT = '../../fonts/NewYork.ttf'
size = 1
n_font = ImageFont.truetype(FONT, size)
(n_width, n_height) = n_font.getsize(text)
text_box = (min(watermark.size[0], watermark.size[1]) / 4)
print(text_box)
while ((n_width + n_height) < text_box):
size += 2
n_font = ImageFont.truetype(FONT, size=size)
(n_width, n_height) = n_font.getsize(text)
text_width = (watermark.size[0] - n_width)
text_height = (watermark.size[1] - n_height)
print(text_width, text_height)
draw = ImageDraw.Draw(watermark, 'RGBA')
delta_right = ((- text_width) / 9)
delta_bottom = ((- text_height) / 24)
draw.text(((delta_right + text_width), (delta_bottom + text_height)), text, font=n_font, fill='#ffffff')
alpha = watermark.split()[3]
alpha = ImageEnhance.Brightness(alpha).enhance(opacity)
watermark.putalpha(alpha)
str_uuid = ''.join(str(uuid.uuid4()).split('-'))
splits = source_file.split('.')
image_suffix = splits[(len(splits) - 1)]
out_file = ((((out_base_path + '/') + str_uuid) + '.') + image_suffix)
print(out_file)
Image.composite(watermark, img, watermark).save(out_file, image_suffix)
print(u'文字水印成功')
return str_uuid | def text_watermark(source_file, out_base_path='.', text='@Grayson', angle=23, opacity=0.5):
'\'\'\n 添加一个文字水印,做成透明水印的模样,应该是png图层合并\n http://www.pythoncentral.io/watermark-images-python-2x/\n 这里会产生著名的 ImportError("The _imagingft C module is not installed") 错误\n Pillow通过安装来解决 pip install Pillow\n '
img = Image.open(source_file)
watermark = Image.new('RGBA', img.size)
FONT = '../../fonts/NewYork.ttf'
size = 1
n_font = ImageFont.truetype(FONT, size)
(n_width, n_height) = n_font.getsize(text)
text_box = (min(watermark.size[0], watermark.size[1]) / 4)
print(text_box)
while ((n_width + n_height) < text_box):
size += 2
n_font = ImageFont.truetype(FONT, size=size)
(n_width, n_height) = n_font.getsize(text)
text_width = (watermark.size[0] - n_width)
text_height = (watermark.size[1] - n_height)
print(text_width, text_height)
draw = ImageDraw.Draw(watermark, 'RGBA')
delta_right = ((- text_width) / 9)
delta_bottom = ((- text_height) / 24)
draw.text(((delta_right + text_width), (delta_bottom + text_height)), text, font=n_font, fill='#ffffff')
alpha = watermark.split()[3]
alpha = ImageEnhance.Brightness(alpha).enhance(opacity)
watermark.putalpha(alpha)
str_uuid = ''.join(str(uuid.uuid4()).split('-'))
splits = source_file.split('.')
image_suffix = splits[(len(splits) - 1)]
out_file = ((((out_base_path + '/') + str_uuid) + '.') + image_suffix)
print(out_file)
Image.composite(watermark, img, watermark).save(out_file, image_suffix)
print(u'文字水印成功')
return str_uuid<|docstring|>''
添加一个文字水印,做成透明水印的模样,应该是png图层合并
http://www.pythoncentral.io/watermark-images-python-2x/
这里会产生著名的 ImportError("The _imagingft C module is not installed") 错误
Pillow通过安装来解决 pip install Pillow<|endoftext|> |
8b138fe6d59b3ed02dada5b7a4bb072d8ee29a9de064f53371e3850cb9d72375 | def resizeImg(img, dst_w=0, dst_h=0, qua=85):
"''\n 只给了宽或者高,或者两个都给了,然后取比例合适的\n 如果图片比给要压缩的尺寸都要小,就不压缩了\n "
(ori_w, ori_h) = im.size
widthRatio = heightRatio = None
ratio = 1
if ((ori_w and (ori_w > dst_w)) or (ori_h and (ori_h > dst_h))):
if (dst_w and (ori_w > dst_w)):
widthRatio = (float(dst_w) / ori_w)
if (dst_h and (ori_h > dst_h)):
heightRatio = (float(dst_h) / ori_h)
if (widthRatio and heightRatio):
if (widthRatio < heightRatio):
ratio = widthRatio
else:
ratio = heightRatio
if (widthRatio and (not heightRatio)):
ratio = widthRatio
if (heightRatio and (not widthRatio)):
ratio = heightRatio
newWidth = int((ori_w * ratio))
newHeight = int((ori_h * ratio))
else:
newWidth = ori_w
newHeight = ori_h
im.resize((newWidth, newHeight), Image.ANTIALIAS).save('test5.jpg', 'JPEG', quality=qua)
print(u'等比压缩完成')
"'' \n Image.ANTIALIAS还有如下值: \n NEAREST: use nearest neighbour \n BILINEAR: linear interpolation in a 2x2 environment \n BICUBIC:cubic spline interpolation in a 4x4 environment \n ANTIALIAS:best down-sizing filter \n " | ''
只给了宽或者高,或者两个都给了,然后取比例合适的
如果图片比给要压缩的尺寸都要小,就不压缩了 | util/WaterMarkUtils.py | resizeImg | wpwbb510582246/buaa-daka | 17 | python | def resizeImg(img, dst_w=0, dst_h=0, qua=85):
"\n 只给了宽或者高,或者两个都给了,然后取比例合适的\n 如果图片比给要压缩的尺寸都要小,就不压缩了\n "
(ori_w, ori_h) = im.size
widthRatio = heightRatio = None
ratio = 1
if ((ori_w and (ori_w > dst_w)) or (ori_h and (ori_h > dst_h))):
if (dst_w and (ori_w > dst_w)):
widthRatio = (float(dst_w) / ori_w)
if (dst_h and (ori_h > dst_h)):
heightRatio = (float(dst_h) / ori_h)
if (widthRatio and heightRatio):
if (widthRatio < heightRatio):
ratio = widthRatio
else:
ratio = heightRatio
if (widthRatio and (not heightRatio)):
ratio = widthRatio
if (heightRatio and (not widthRatio)):
ratio = heightRatio
newWidth = int((ori_w * ratio))
newHeight = int((ori_h * ratio))
else:
newWidth = ori_w
newHeight = ori_h
im.resize((newWidth, newHeight), Image.ANTIALIAS).save('test5.jpg', 'JPEG', quality=qua)
print(u'等比压缩完成')
" \n Image.ANTIALIAS还有如下值: \n NEAREST: use nearest neighbour \n BILINEAR: linear interpolation in a 2x2 environment \n BICUBIC:cubic spline interpolation in a 4x4 environment \n ANTIALIAS:best down-sizing filter \n " | def resizeImg(img, dst_w=0, dst_h=0, qua=85):
"\n 只给了宽或者高,或者两个都给了,然后取比例合适的\n 如果图片比给要压缩的尺寸都要小,就不压缩了\n "
(ori_w, ori_h) = im.size
widthRatio = heightRatio = None
ratio = 1
if ((ori_w and (ori_w > dst_w)) or (ori_h and (ori_h > dst_h))):
if (dst_w and (ori_w > dst_w)):
widthRatio = (float(dst_w) / ori_w)
if (dst_h and (ori_h > dst_h)):
heightRatio = (float(dst_h) / ori_h)
if (widthRatio and heightRatio):
if (widthRatio < heightRatio):
ratio = widthRatio
else:
ratio = heightRatio
if (widthRatio and (not heightRatio)):
ratio = widthRatio
if (heightRatio and (not widthRatio)):
ratio = heightRatio
newWidth = int((ori_w * ratio))
newHeight = int((ori_h * ratio))
else:
newWidth = ori_w
newHeight = ori_h
im.resize((newWidth, newHeight), Image.ANTIALIAS).save('test5.jpg', 'JPEG', quality=qua)
print(u'等比压缩完成')
" \n Image.ANTIALIAS还有如下值: \n NEAREST: use nearest neighbour \n BILINEAR: linear interpolation in a 2x2 environment \n BICUBIC:cubic spline interpolation in a 4x4 environment \n ANTIALIAS:best down-sizing filter \n "<|docstring|>''
只给了宽或者高,或者两个都给了,然后取比例合适的
如果图片比给要压缩的尺寸都要小,就不压缩了<|endoftext|> |
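The ratio selection in `resizeImg` above (shrink only, never enlarge; when both a target width and height are given, keep the stricter ratio) is pure arithmetic and can be pulled out as a helper — the function name is mine, but the logic follows the row's code:

```python
def fit_ratio(ori_w, ori_h, dst_w=0, dst_h=0):
    """Scale factor for proportional downsizing.

    A dimension only contributes a ratio when the original exceeds the
    target for that dimension; with two candidate ratios the smaller
    (stricter) one wins, and with none the image is left untouched (1.0).
    """
    width_ratio = float(dst_w) / ori_w if dst_w and ori_w > dst_w else None
    height_ratio = float(dst_h) / ori_h if dst_h and ori_h > dst_h else None
    ratios = [r for r in (width_ratio, height_ratio) if r is not None]
    return min(ratios) if ratios else 1.0
```

For a 1000x500 image fit into 200x200, the width ratio (0.2) is stricter than the height ratio (0.4), so the result is 200x100.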
f79778d7c6a0667b82175bc50796b3d87ea6df4b26573094affd9ecfda0a5400 | def clipResizeImg(im, dst_w, dst_h, qua=95):
"''\n 先按照一个比例对图片剪裁,然后在压缩到指定尺寸\n 一个图片 16:5 ,压缩为 2:1 并且宽为200,就要先把图片裁剪成 10:5,然后在等比压缩\n "
(ori_w, ori_h) = im.size
dst_scale = (float(dst_w) / dst_h)
ori_scale = (float(ori_w) / ori_h)
if (ori_scale <= dst_scale):
width = ori_w
height = int((width / dst_scale))
x = 0
y = ((ori_h - height) / 2)
else:
height = ori_h
width = int((height * dst_scale))
x = ((ori_w - width) / 2)
y = 0
box = (x, y, (width + x), (height + y))
newIm = im.crop(box)
im = None
ratio = (float(dst_w) / width)
newWidth = int((width * ratio))
newHeight = int((height * ratio))
newIm.resize((newWidth, newHeight), Image.ANTIALIAS).save((abspath + '/test6.jpg'), 'JPEG', quality=95)
print(('old size %s %s' % (ori_w, ori_h)))
print(('new size %s %s' % (newWidth, newHeight)))
print(u'剪裁后等比压缩完成') | ''
先按照一个比例对图片剪裁,然后在压缩到指定尺寸
一个图片 16:5 ,压缩为 2:1 并且宽为200,就要先把图片裁剪成 10:5,然后在等比压缩 | util/WaterMarkUtils.py | clipResizeImg | wpwbb510582246/buaa-daka | 17 | python | def clipResizeImg(im, dst_w, dst_h, qua=95):
"\n 先按照一个比例对图片剪裁,然后在压缩到指定尺寸\n 一个图片 16:5 ,压缩为 2:1 并且宽为200,就要先把图片裁剪成 10:5,然后在等比压缩\n "
(ori_w, ori_h) = im.size
dst_scale = (float(dst_w) / dst_h)
ori_scale = (float(ori_w) / ori_h)
if (ori_scale <= dst_scale):
width = ori_w
height = int((width / dst_scale))
x = 0
y = ((ori_h - height) / 2)
else:
height = ori_h
width = int((height * dst_scale))
x = ((ori_w - width) / 2)
y = 0
box = (x, y, (width + x), (height + y))
newIm = im.crop(box)
im = None
ratio = (float(dst_w) / width)
newWidth = int((width * ratio))
newHeight = int((height * ratio))
newIm.resize((newWidth, newHeight), Image.ANTIALIAS).save((abspath + '/test6.jpg'), 'JPEG', quality=95)
print(('old size %s %s' % (ori_w, ori_h)))
print(('new size %s %s' % (newWidth, newHeight)))
print(u'剪裁后等比压缩完成') | def clipResizeImg(im, dst_w, dst_h, qua=95):
"\n 先按照一个比例对图片剪裁,然后在压缩到指定尺寸\n 一个图片 16:5 ,压缩为 2:1 并且宽为200,就要先把图片裁剪成 10:5,然后在等比压缩\n "
(ori_w, ori_h) = im.size
dst_scale = (float(dst_w) / dst_h)
ori_scale = (float(ori_w) / ori_h)
if (ori_scale <= dst_scale):
width = ori_w
height = int((width / dst_scale))
x = 0
y = ((ori_h - height) / 2)
else:
height = ori_h
width = int((height * dst_scale))
x = ((ori_w - width) / 2)
y = 0
box = (x, y, (width + x), (height + y))
newIm = im.crop(box)
im = None
ratio = (float(dst_w) / width)
newWidth = int((width * ratio))
newHeight = int((height * ratio))
newIm.resize((newWidth, newHeight), Image.ANTIALIAS).save((abspath + '/test6.jpg'), 'JPEG', quality=95)
print(('old size %s %s' % (ori_w, ori_h)))
print(('new size %s %s' % (newWidth, newHeight)))
print(u'剪裁后等比压缩完成')<|docstring|>''
先按照一个比例对图片剪裁,然后在压缩到指定尺寸
一个图片 16:5 ,压缩为 2:1 并且宽为200,就要先把图片裁剪成 10:5,然后在等比压缩<|endoftext|> |
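The crop-then-resize strategy described above first trims the original to the target aspect ratio, centering the kept region. That box computation can be sketched separately (helper name is mine; note I use integer division `//` for pixel coordinates, where the row's Python-2-style `/2` would yield floats under Python 3):

```python
def center_crop_box(ori_w, ori_h, dst_w, dst_h):
    """Crop box (left, upper, right, lower) trimming an image to the
    dst_w:dst_h aspect ratio, keeping the center, as in clipResizeImg."""
    dst_scale = float(dst_w) / dst_h
    ori_scale = float(ori_w) / ori_h
    if ori_scale <= dst_scale:          # too tall: trim top and bottom
        width = ori_w
        height = int(width / dst_scale)
        x, y = 0, (ori_h - height) // 2
    else:                               # too wide: trim left and right
        height = ori_h
        width = int(height * dst_scale)
        x, y = (ori_w - width) // 2, 0
    return (x, y, x + width, y + height)
```

For the docstring's example, a 1600x500 (16:5) image cropped to 2:1 keeps a centered 1000x500 region before the final proportional resize.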
eaaae998fb23764ba09d1ee91e0a0b7c219975f22f266f1854e7cfcbd9e9d25f | def _perform_makemigrations(directory=None, message=None, sql=False, head='head', splice=False, branch_label=None, version_path=None, rev_id=None, x_arg=None):
"Alias for 'revision --autogenerate'"
config = current_app.extensions['migrate'].migrate.get_config(directory, opts=['autogenerate'], x_arg=x_arg)
command.revision(config, message, autogenerate=True, sql=sql, head=head, splice=splice, branch_label=branch_label, version_path=version_path, rev_id=rev_id) | Alias for 'revision --autogenerate' | navycut/orm/sqla/migrator/__init__.py | _perform_makemigrations | navycut/navycut | 13 | python | def _perform_makemigrations(directory=None, message=None, sql=False, head='head', splice=False, branch_label=None, version_path=None, rev_id=None, x_arg=None):
config = current_app.extensions['migrate'].migrate.get_config(directory, opts=['autogenerate'], x_arg=x_arg)
command.revision(config, message, autogenerate=True, sql=sql, head=head, splice=splice, branch_label=branch_label, version_path=version_path, rev_id=rev_id) | def _perform_makemigrations(directory=None, message=None, sql=False, head='head', splice=False, branch_label=None, version_path=None, rev_id=None, x_arg=None):
config = current_app.extensions['migrate'].migrate.get_config(directory, opts=['autogenerate'], x_arg=x_arg)
command.revision(config, message, autogenerate=True, sql=sql, head=head, splice=splice, branch_label=branch_label, version_path=version_path, rev_id=rev_id)<|docstring|>Alias for 'revision --autogenerate'<|endoftext|> |
297c1725f1fbb25f3f75731e8251b4367127cd18f2141a9cf42ddf698b4695c8 | def __init__(self, cluster_name=None, network_config=None, node_configs=None, encryption_config=None, metadata_fault_tolerance=None):
'Constructor for the CreateVirtualClusterParameters class'
self.cluster_name = cluster_name
self.encryption_config = encryption_config
self.metadata_fault_tolerance = metadata_fault_tolerance
self.network_config = network_config
self.node_configs = node_configs | Constructor for the CreateVirtualClusterParameters class | cohesity_management_sdk/models/create_virtual_cluster_parameters.py | __init__ | pyashish/management-sdk-python | 1 | python | def __init__(self, cluster_name=None, network_config=None, node_configs=None, encryption_config=None, metadata_fault_tolerance=None):
self.cluster_name = cluster_name
self.encryption_config = encryption_config
self.metadata_fault_tolerance = metadata_fault_tolerance
self.network_config = network_config
self.node_configs = node_configs | def __init__(self, cluster_name=None, network_config=None, node_configs=None, encryption_config=None, metadata_fault_tolerance=None):
self.cluster_name = cluster_name
self.encryption_config = encryption_config
self.metadata_fault_tolerance = metadata_fault_tolerance
self.network_config = network_config
self.node_configs = node_configs<|docstring|>Constructor for the CreateVirtualClusterParameters class<|endoftext|> |
30a867a87149fc4a3f051a8902e5f0a318b5fa03714a6476c60707ea63ea138a | @classmethod
def from_dictionary(cls, dictionary):
"Creates an instance of this model from a dictionary\n\n Args:\n dictionary (dictionary): A dictionary representation of the object as\n obtained from the deserialization of the server's response. The keys\n MUST match property names in the API description.\n\n Returns:\n object: An instance of this structure class.\n\n "
if (dictionary is None):
return None
cluster_name = dictionary.get('clusterName')
network_config = (cohesity_management_sdk.models.network_configuration.NetworkConfiguration.from_dictionary(dictionary.get('networkConfig')) if dictionary.get('networkConfig') else None)
node_configs = None
if (dictionary.get('nodeConfigs') != None):
node_configs = list()
for structure in dictionary.get('nodeConfigs'):
node_configs.append(cohesity_management_sdk.models.virtual_node_configuration.VirtualNodeConfiguration.from_dictionary(structure))
encryption_config = (cohesity_management_sdk.models.encryption_configuration.EncryptionConfiguration.from_dictionary(dictionary.get('encryptionConfig')) if dictionary.get('encryptionConfig') else None)
metadata_fault_tolerance = dictionary.get('metadataFaultTolerance')
return cls(cluster_name, network_config, node_configs, encryption_config, metadata_fault_tolerance) | Creates an instance of this model from a dictionary
Args:
dictionary (dictionary): A dictionary representation of the object as
obtained from the deserialization of the server's response. The keys
MUST match property names in the API description.
Returns:
object: An instance of this structure class. | cohesity_management_sdk/models/create_virtual_cluster_parameters.py | from_dictionary | pyashish/management-sdk-python | 1 | python | @classmethod
def from_dictionary(cls, dictionary):
"Creates an instance of this model from a dictionary\n\n Args:\n dictionary (dictionary): A dictionary representation of the object as\n obtained from the deserialization of the server's response. The keys\n MUST match property names in the API description.\n\n Returns:\n object: An instance of this structure class.\n\n "
if (dictionary is None):
return None
cluster_name = dictionary.get('clusterName')
network_config = (cohesity_management_sdk.models.network_configuration.NetworkConfiguration.from_dictionary(dictionary.get('networkConfig')) if dictionary.get('networkConfig') else None)
node_configs = None
if (dictionary.get('nodeConfigs') != None):
node_configs = list()
for structure in dictionary.get('nodeConfigs'):
node_configs.append(cohesity_management_sdk.models.virtual_node_configuration.VirtualNodeConfiguration.from_dictionary(structure))
encryption_config = (cohesity_management_sdk.models.encryption_configuration.EncryptionConfiguration.from_dictionary(dictionary.get('encryptionConfig')) if dictionary.get('encryptionConfig') else None)
metadata_fault_tolerance = dictionary.get('metadataFaultTolerance')
return cls(cluster_name, network_config, node_configs, encryption_config, metadata_fault_tolerance) | @classmethod
def from_dictionary(cls, dictionary):
"Creates an instance of this model from a dictionary\n\n Args:\n dictionary (dictionary): A dictionary representation of the object as\n obtained from the deserialization of the server's response. The keys\n MUST match property names in the API description.\n\n Returns:\n object: An instance of this structure class.\n\n "
if (dictionary is None):
return None
cluster_name = dictionary.get('clusterName')
network_config = (cohesity_management_sdk.models.network_configuration.NetworkConfiguration.from_dictionary(dictionary.get('networkConfig')) if dictionary.get('networkConfig') else None)
node_configs = None
if (dictionary.get('nodeConfigs') != None):
node_configs = list()
for structure in dictionary.get('nodeConfigs'):
node_configs.append(cohesity_management_sdk.models.virtual_node_configuration.VirtualNodeConfiguration.from_dictionary(structure))
encryption_config = (cohesity_management_sdk.models.encryption_configuration.EncryptionConfiguration.from_dictionary(dictionary.get('encryptionConfig')) if dictionary.get('encryptionConfig') else None)
metadata_fault_tolerance = dictionary.get('metadataFaultTolerance')
return cls(cluster_name, network_config, node_configs, encryption_config, metadata_fault_tolerance)<|docstring|>Creates an instance of this model from a dictionary
Args:
dictionary (dictionary): A dictionary representation of the object as
obtained from the deserialization of the server's response. The keys
MUST match property names in the API description.
Returns:
object: An instance of this structure class.<|endoftext|> |
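The `from_dictionary` record above follows a common SDK deserialization pattern: return `None` for a missing payload, read each field with `.get`, and delegate nested models to their own `from_dictionary`. A standalone sketch of that pattern (class and key names here are illustrative, not the Cohesity SDK's actual models):

```python
class NetworkConfig:
    """Toy nested model, standing in for a generated SDK class."""

    def __init__(self, gateway=None):
        self.gateway = gateway

    @classmethod
    def from_dictionary(cls, dictionary):
        if dictionary is None:
            return None
        return cls(gateway=dictionary.get('gateway'))


class ClusterParams:
    """Toy top-level model mirroring the from_dictionary pattern."""

    def __init__(self, cluster_name=None, network_config=None, node_ids=None):
        self.cluster_name = cluster_name
        self.network_config = network_config
        self.node_ids = node_ids

    @classmethod
    def from_dictionary(cls, dictionary):
        # A missing payload deserializes to None rather than raising.
        if dictionary is None:
            return None
        cluster_name = dictionary.get('clusterName')
        # Nested models are delegated to their own from_dictionary.
        network_config = NetworkConfig.from_dictionary(dictionary.get('networkConfig'))
        # List-valued keys are copied element by element when present.
        raw_ids = dictionary.get('nodeIds')
        node_ids = list(raw_ids) if raw_ids is not None else None
        return cls(cluster_name, network_config, node_ids)


params = ClusterParams.from_dictionary(
    {'clusterName': 'vc1',
     'networkConfig': {'gateway': '10.0.0.1'},
     'nodeIds': [1, 2]})
```

Because every field is read with `.get`, partial server responses deserialize into objects with `None` fields instead of raising `KeyError`.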
b6cffe6977a41b8affd2f32f7376219097dde727542ce6dbe75a3465ccdea299 | def benchmark_using_loadgen():
'Perform the benchmark using python API for the LoadGen library'
scenario = {'SingleStream': lg.TestScenario.SingleStream, 'MultiStream': lg.TestScenario.MultiStream, 'Server': lg.TestScenario.Server, 'Offline': lg.TestScenario.Offline}[LOADGEN_SCENARIO]
mode = {'AccuracyOnly': lg.TestMode.AccuracyOnly, 'PerformanceOnly': lg.TestMode.PerformanceOnly, 'SubmissionRun': lg.TestMode.SubmissionRun}[LOADGEN_MODE]
ts = lg.TestSettings()
ts.FromConfig(MLPERF_CONF_PATH, MODEL_NAME, LOADGEN_SCENARIO)
ts.FromConfig(USER_CONF_PATH, MODEL_NAME, LOADGEN_SCENARIO)
ts.scenario = scenario
ts.mode = mode
sut = lg.ConstructSUT(issue_queries, flush_queries, process_latencies)
qsl = lg.ConstructQSL(LOADGEN_DATASET_SIZE, LOADGEN_BUFFER_SIZE, load_query_samples, unload_query_samples)
log_settings = lg.LogSettings()
log_settings.enable_trace = False
lg.StartTestWithLogSettings(sut, qsl, ts, log_settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut) | Perform the benchmark using python API for the LoadGen library | program/example-loadgen-py/env_controlled_loadgen_example.py | benchmark_using_loadgen | kaidrake/ck-mlperf | 6 | python | def benchmark_using_loadgen():
scenario = {'SingleStream': lg.TestScenario.SingleStream, 'MultiStream': lg.TestScenario.MultiStream, 'Server': lg.TestScenario.Server, 'Offline': lg.TestScenario.Offline}[LOADGEN_SCENARIO]
mode = {'AccuracyOnly': lg.TestMode.AccuracyOnly, 'PerformanceOnly': lg.TestMode.PerformanceOnly, 'SubmissionRun': lg.TestMode.SubmissionRun}[LOADGEN_MODE]
ts = lg.TestSettings()
ts.FromConfig(MLPERF_CONF_PATH, MODEL_NAME, LOADGEN_SCENARIO)
ts.FromConfig(USER_CONF_PATH, MODEL_NAME, LOADGEN_SCENARIO)
ts.scenario = scenario
ts.mode = mode
sut = lg.ConstructSUT(issue_queries, flush_queries, process_latencies)
qsl = lg.ConstructQSL(LOADGEN_DATASET_SIZE, LOADGEN_BUFFER_SIZE, load_query_samples, unload_query_samples)
log_settings = lg.LogSettings()
log_settings.enable_trace = False
lg.StartTestWithLogSettings(sut, qsl, ts, log_settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut) | def benchmark_using_loadgen():
scenario = {'SingleStream': lg.TestScenario.SingleStream, 'MultiStream': lg.TestScenario.MultiStream, 'Server': lg.TestScenario.Server, 'Offline': lg.TestScenario.Offline}[LOADGEN_SCENARIO]
mode = {'AccuracyOnly': lg.TestMode.AccuracyOnly, 'PerformanceOnly': lg.TestMode.PerformanceOnly, 'SubmissionRun': lg.TestMode.SubmissionRun}[LOADGEN_MODE]
ts = lg.TestSettings()
ts.FromConfig(MLPERF_CONF_PATH, MODEL_NAME, LOADGEN_SCENARIO)
ts.FromConfig(USER_CONF_PATH, MODEL_NAME, LOADGEN_SCENARIO)
ts.scenario = scenario
ts.mode = mode
sut = lg.ConstructSUT(issue_queries, flush_queries, process_latencies)
qsl = lg.ConstructQSL(LOADGEN_DATASET_SIZE, LOADGEN_BUFFER_SIZE, load_query_samples, unload_query_samples)
log_settings = lg.LogSettings()
log_settings.enable_trace = False
lg.StartTestWithLogSettings(sut, qsl, ts, log_settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)<|docstring|>Perform the benchmark using python API for the LoadGen library<|endoftext|> |
269ff585a938d2fe2c5a4b2e08d2e58f8a115543dc98e9c6385f9acef572b956 | def _make_modules(entropy, add_sub_entropy):
'Returns modules given "difficulty" parameters.'
sample_args_pure = composition.PreSampleArgs(1, 1, *entropy)
add_sub_sample_args_pure = composition.PreSampleArgs(1, 1, *add_sub_entropy)
return {'add_or_sub': functools.partial(add_or_sub, None, add_sub_sample_args_pure), 'add_sub_multiple': functools.partial(add_sub_multiple, _INT, sample_args_pure), 'add_or_sub_in_base': functools.partial(add_or_sub_in_base, sample_args_pure), 'mul': functools.partial(mul, None, sample_args_pure), 'div': functools.partial(div, None, sample_args_pure), 'mul_div_multiple': functools.partial(mul_div_multiple, _INT_OR_RATIONAL, sample_args_pure), 'mixed': functools.partial(mixed, _INT_OR_RATIONAL, sample_args_pure), 'nearest_integer_root': functools.partial(nearest_integer_root, sample_args_pure), 'simplify_surd': functools.partial(simplify_surd, None, sample_args_pure)} | Returns modules given "difficulty" parameters. | mathematics_dataset/modules/arithmetic.py | _make_modules | Wikidepia/mathematics_dataset_id | 0 | python | def _make_modules(entropy, add_sub_entropy):
sample_args_pure = composition.PreSampleArgs(1, 1, *entropy)
add_sub_sample_args_pure = composition.PreSampleArgs(1, 1, *add_sub_entropy)
return {'add_or_sub': functools.partial(add_or_sub, None, add_sub_sample_args_pure), 'add_sub_multiple': functools.partial(add_sub_multiple, _INT, sample_args_pure), 'add_or_sub_in_base': functools.partial(add_or_sub_in_base, sample_args_pure), 'mul': functools.partial(mul, None, sample_args_pure), 'div': functools.partial(div, None, sample_args_pure), 'mul_div_multiple': functools.partial(mul_div_multiple, _INT_OR_RATIONAL, sample_args_pure), 'mixed': functools.partial(mixed, _INT_OR_RATIONAL, sample_args_pure), 'nearest_integer_root': functools.partial(nearest_integer_root, sample_args_pure), 'simplify_surd': functools.partial(simplify_surd, None, sample_args_pure)} | def _make_modules(entropy, add_sub_entropy):
sample_args_pure = composition.PreSampleArgs(1, 1, *entropy)
add_sub_sample_args_pure = composition.PreSampleArgs(1, 1, *add_sub_entropy)
return {'add_or_sub': functools.partial(add_or_sub, None, add_sub_sample_args_pure), 'add_sub_multiple': functools.partial(add_sub_multiple, _INT, sample_args_pure), 'add_or_sub_in_base': functools.partial(add_or_sub_in_base, sample_args_pure), 'mul': functools.partial(mul, None, sample_args_pure), 'div': functools.partial(div, None, sample_args_pure), 'mul_div_multiple': functools.partial(mul_div_multiple, _INT_OR_RATIONAL, sample_args_pure), 'mixed': functools.partial(mixed, _INT_OR_RATIONAL, sample_args_pure), 'nearest_integer_root': functools.partial(nearest_integer_root, sample_args_pure), 'simplify_surd': functools.partial(simplify_surd, None, sample_args_pure)}<|docstring|>Returns modules given "difficulty" parameters.<|endoftext|> |
a90a9689862025d92ce99f6aacd972d733535d588f010f9a4c6fa9d023763ad5 | def train(entropy_fn):
'Returns dict of training modules.'
return _make_modules(entropy=entropy_fn(_ENTROPY_TRAIN), add_sub_entropy=entropy_fn(_ADD_SUB_ENTROPY_TRAIN)) | Returns dict of training modules. | mathematics_dataset/modules/arithmetic.py | train | Wikidepia/mathematics_dataset_id | 0 | python | def train(entropy_fn):
return _make_modules(entropy=entropy_fn(_ENTROPY_TRAIN), add_sub_entropy=entropy_fn(_ADD_SUB_ENTROPY_TRAIN)) | def train(entropy_fn):
return _make_modules(entropy=entropy_fn(_ENTROPY_TRAIN), add_sub_entropy=entropy_fn(_ADD_SUB_ENTROPY_TRAIN))<|docstring|>Returns dict of training modules.<|endoftext|> |
badc8c9eacb16436a627060ccb61854abfd00e797015414b889e96350161f1e0 | def test():
'Returns dict of testing modules.'
return _make_modules(entropy=_ENTROPY_INTERPOLATE, add_sub_entropy=_ADD_SUB_ENTROPY_INTERPOLATE) | Returns dict of testing modules. | mathematics_dataset/modules/arithmetic.py | test | Wikidepia/mathematics_dataset_id | 0 | python | def test():
return _make_modules(entropy=_ENTROPY_INTERPOLATE, add_sub_entropy=_ADD_SUB_ENTROPY_INTERPOLATE) | def test():
return _make_modules(entropy=_ENTROPY_INTERPOLATE, add_sub_entropy=_ADD_SUB_ENTROPY_INTERPOLATE)<|docstring|>Returns dict of testing modules.<|endoftext|> |
a733b29444ff4c89082226b750847f001b3f5fa24561ea1f15bf046682e3313c | def test_extra():
'Returns dict of extrapolation testing modules.'
sample_args_pure = composition.PreSampleArgs(1, 1, *_ENTROPY_EXTRAPOLATE)
add_sub_sample_args_pure = composition.PreSampleArgs(1, 1, *_ADD_SUB_ENTROPY_EXTRAPOLATE)
train_length = arithmetic.length_range_for_entropy(_ENTROPY_TRAIN[1])[1]
def extrapolate_length():
return random.randint((train_length + 1), (train_length + _EXTRAPOLATE_EXTRA_LENGTH))
def add_sub_multiple_longer():
return add_sub_multiple(_INT, sample_args_pure, length=extrapolate_length())
def mul_div_multiple_longer():
return mul_div_multiple(_INT, sample_args_pure, length=extrapolate_length())
def mixed_longer():
return mixed(_INT, sample_args_pure, length=extrapolate_length())
return {'add_or_sub_big': functools.partial(add_or_sub, None, add_sub_sample_args_pure), 'mul_big': functools.partial(mul, None, sample_args_pure), 'div_big': functools.partial(div, None, sample_args_pure), 'add_sub_multiple_longer': add_sub_multiple_longer, 'mul_div_multiple_longer': mul_div_multiple_longer, 'mixed_longer': mixed_longer} | Returns dict of extrapolation testing modules. | mathematics_dataset/modules/arithmetic.py | test_extra | Wikidepia/mathematics_dataset_id | 0 | python | def test_extra():
sample_args_pure = composition.PreSampleArgs(1, 1, *_ENTROPY_EXTRAPOLATE)
add_sub_sample_args_pure = composition.PreSampleArgs(1, 1, *_ADD_SUB_ENTROPY_EXTRAPOLATE)
train_length = arithmetic.length_range_for_entropy(_ENTROPY_TRAIN[1])[1]
def extrapolate_length():
return random.randint((train_length + 1), (train_length + _EXTRAPOLATE_EXTRA_LENGTH))
def add_sub_multiple_longer():
return add_sub_multiple(_INT, sample_args_pure, length=extrapolate_length())
def mul_div_multiple_longer():
return mul_div_multiple(_INT, sample_args_pure, length=extrapolate_length())
def mixed_longer():
return mixed(_INT, sample_args_pure, length=extrapolate_length())
return {'add_or_sub_big': functools.partial(add_or_sub, None, add_sub_sample_args_pure), 'mul_big': functools.partial(mul, None, sample_args_pure), 'div_big': functools.partial(div, None, sample_args_pure), 'add_sub_multiple_longer': add_sub_multiple_longer, 'mul_div_multiple_longer': mul_div_multiple_longer, 'mixed_longer': mixed_longer} | def test_extra():
sample_args_pure = composition.PreSampleArgs(1, 1, *_ENTROPY_EXTRAPOLATE)
add_sub_sample_args_pure = composition.PreSampleArgs(1, 1, *_ADD_SUB_ENTROPY_EXTRAPOLATE)
train_length = arithmetic.length_range_for_entropy(_ENTROPY_TRAIN[1])[1]
def extrapolate_length():
return random.randint((train_length + 1), (train_length + _EXTRAPOLATE_EXTRA_LENGTH))
def add_sub_multiple_longer():
return add_sub_multiple(_INT, sample_args_pure, length=extrapolate_length())
def mul_div_multiple_longer():
return mul_div_multiple(_INT, sample_args_pure, length=extrapolate_length())
def mixed_longer():
return mixed(_INT, sample_args_pure, length=extrapolate_length())
return {'add_or_sub_big': functools.partial(add_or_sub, None, add_sub_sample_args_pure), 'mul_big': functools.partial(mul, None, sample_args_pure), 'div_big': functools.partial(div, None, sample_args_pure), 'add_sub_multiple_longer': add_sub_multiple_longer, 'mul_div_multiple_longer': mul_div_multiple_longer, 'mixed_longer': mixed_longer}<|docstring|>Returns dict of extrapolation testing modules.<|endoftext|> |
f9dbefa752df1894d02f9d87468e80fd4bd95bbcf5bcbc40f17ec84ea7c2fc8b | def _value_sampler(value):
'Returns sampler (e.g., number.integer) appropriate for `value`.'
if ((value == _INT) or number.is_integer(value)):
return functools.partial(number.integer, signed=True)
if ((value == _INT_OR_RATIONAL) or isinstance(value, sympy.Rational)):
return functools.partial(number.integer_or_rational, signed=True)
if isinstance(value, display.Decimal):
return functools.partial(number.integer_or_decimal, signed=True)
raise ValueError('Unrecognized value {} of type {}'.format(value, type(value))) | Returns sampler (e.g., number.integer) appropriate for `value`. | mathematics_dataset/modules/arithmetic.py | _value_sampler | Wikidepia/mathematics_dataset_id | 0 | python | def _value_sampler(value):
if ((value == _INT) or number.is_integer(value)):
return functools.partial(number.integer, signed=True)
if ((value == _INT_OR_RATIONAL) or isinstance(value, sympy.Rational)):
return functools.partial(number.integer_or_rational, signed=True)
if isinstance(value, display.Decimal):
return functools.partial(number.integer_or_decimal, signed=True)
raise ValueError('Unrecognized value {} of type {}'.format(value, type(value))) | def _value_sampler(value):
if ((value == _INT) or number.is_integer(value)):
return functools.partial(number.integer, signed=True)
if ((value == _INT_OR_RATIONAL) or isinstance(value, sympy.Rational)):
return functools.partial(number.integer_or_rational, signed=True)
if isinstance(value, display.Decimal):
return functools.partial(number.integer_or_decimal, signed=True)
raise ValueError('Unrecognized value {} of type {}'.format(value, type(value)))<|docstring|>Returns sampler (e.g., number.integer) appropriate for `value`.<|endoftext|> |
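`_value_sampler` above dispatches on the type of `value` and returns a partially-applied sampler rather than a sampled number. A simplified, sympy-free sketch of that dispatch (the toy `integer`/`decimal` samplers are assumptions, not the `number` module's real implementations):

```python
import functools
import random


def integer(entropy, signed=True):
    # Toy sampler: magnitude grows with entropy.
    magnitude = random.randint(0, int(10 ** entropy))
    return -magnitude if signed and random.random() < 0.5 else magnitude


def decimal(entropy, signed=True):
    # Toy decimal sampler built on the integer one.
    return integer(entropy, signed) / 100


def value_sampler(value):
    """Return a sampler appropriate for the type of `value` (simplified)."""
    if isinstance(value, int):
        return functools.partial(integer, signed=True)
    if isinstance(value, float):
        return functools.partial(decimal, signed=True)
    raise ValueError('Unrecognized value {} of type {}'.format(value, type(value)))


sampler = value_sampler(7)   # int -> integer sampler
sample = sampler(2)          # draw with entropy 2
```

Returning a `functools.partial` lets the caller choose the entropy later, which is exactly how `add_or_sub` above uses the result (`sampler(entropy)`).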
94931f6c12a616411420ca500800b37a03b69e9439d2017c996f66b81e70d31c | def _add_question_or_entity(context, p, q, is_question):
'Generates entity or question for adding p + q.'
value = (p.value + q.value)
if is_question:
template = random.choice(['{p} + {q}', '{p}+{q}', 'Kerjakan {p} + {q}.', 'Tambah {p} dan {q}.', 'Gabungkan {p} dan {q}.', 'Jumlah {p} dan {q}.', 'Total dari {p} dan {q}.', 'Tambahkan bersama {p} and {q}.', 'Berapa {p} ditambah {q}?', 'Hitung {p} + {q}.', 'Berapa {p} + {q}?'])
return example.Problem(question=example.question(context, template, p=p, q=q), answer=value)
else:
return composition.Entity(context=context, value=value, description='Misalkan {self} = {p} + {q}.', p=p, q=q) | Generates entity or question for adding p + q. | mathematics_dataset/modules/arithmetic.py | _add_question_or_entity | Wikidepia/mathematics_dataset_id | 0 | python | def _add_question_or_entity(context, p, q, is_question):
value = (p.value + q.value)
if is_question:
template = random.choice(['{p} + {q}', '{p}+{q}', 'Kerjakan {p} + {q}.', 'Tambah {p} dan {q}.', 'Gabungkan {p} dan {q}.', 'Jumlah {p} dan {q}.', 'Total dari {p} dan {q}.', 'Tambahkan bersama {p} and {q}.', 'Berapa {p} ditambah {q}?', 'Hitung {p} + {q}.', 'Berapa {p} + {q}?'])
return example.Problem(question=example.question(context, template, p=p, q=q), answer=value)
else:
return composition.Entity(context=context, value=value, description='Misalkan {self} = {p} + {q}.', p=p, q=q) | def _add_question_or_entity(context, p, q, is_question):
value = (p.value + q.value)
if is_question:
template = random.choice(['{p} + {q}', '{p}+{q}', 'Kerjakan {p} + {q}.', 'Tambah {p} dan {q}.', 'Gabungkan {p} dan {q}.', 'Jumlah {p} dan {q}.', 'Total dari {p} dan {q}.', 'Tambahkan bersama {p} and {q}.', 'Berapa {p} ditambah {q}?', 'Hitung {p} + {q}.', 'Berapa {p} + {q}?'])
return example.Problem(question=example.question(context, template, p=p, q=q), answer=value)
else:
return composition.Entity(context=context, value=value, description='Misalkan {self} = {p} + {q}.', p=p, q=q)<|docstring|>Generates entity or question for adding p + q.<|endoftext|> |
1393f1f6c9567c1331792ced281d7ffe3ccb0cb25a9b04dd69080de8de60ab9e | def _sub_question_or_entity(context, p, q, is_question):
'Generates entity or question for subtraction p - q.'
value = (p.value - q.value)
if is_question:
templates = ['{p} - {q}', 'Kerjakan {p} - {q}.', 'Berapakah {p} dikurangi {q}?', 'Berapakah {p} diambil {q}?', 'Berapakah {q} kurang dari {p}?', 'Kurangi {q} dari {p}.', 'Hitunglah {p} - {q}.', 'Berapakah {p} - {q}?']
if sympy.Ge(p.value, q.value):
for adjective in ['distance', 'difference']:
for pair in ['{p} dan {q}', '{q} dan {p}']:
templates.append('Berapa {} antara {}?'.format(adjective, pair))
template = random.choice(templates)
return example.Problem(question=example.question(context, template, p=p, q=q), answer=value)
else:
return composition.Entity(context=context, value=value, description='Misalkan {self} = {p} - {q}.', p=p, q=q) | Generates entity or question for subtraction p - q. | mathematics_dataset/modules/arithmetic.py | _sub_question_or_entity | Wikidepia/mathematics_dataset_id | 0 | python | def _sub_question_or_entity(context, p, q, is_question):
value = (p.value - q.value)
if is_question:
templates = ['{p} - {q}', 'Kerjakan {p} - {q}.', 'Berapakah {p} dikurangi {q}?', 'Berapakah {p} diambil {q}?', 'Berapakah {q} kurang dari {p}?', 'Kurangi {q} dari {p}.', 'Hitunglah {p} - {q}.', 'Berapakah {p} - {q}?']
if sympy.Ge(p.value, q.value):
for adjective in ['distance', 'difference']:
for pair in ['{p} dan {q}', '{q} dan {p}']:
templates.append('Berapa {} antara {}?'.format(adjective, pair))
template = random.choice(templates)
return example.Problem(question=example.question(context, template, p=p, q=q), answer=value)
else:
return composition.Entity(context=context, value=value, description='Misalkan {self} = {p} - {q}.', p=p, q=q) | def _sub_question_or_entity(context, p, q, is_question):
value = (p.value - q.value)
if is_question:
templates = ['{p} - {q}', 'Kerjakan {p} - {q}.', 'Berapakah {p} dikurangi {q}?', 'Berapakah {p} diambil {q}?', 'Berapakah {q} kurang dari {p}?', 'Kurangi {q} dari {p}.', 'Hitunglah {p} - {q}.', 'Berapakah {p} - {q}?']
if sympy.Ge(p.value, q.value):
for adjective in ['distance', 'difference']:
for pair in ['{p} dan {q}', '{q} dan {p}']:
templates.append('Berapa {} antara {}?'.format(adjective, pair))
template = random.choice(templates)
return example.Problem(question=example.question(context, template, p=p, q=q), answer=value)
else:
return composition.Entity(context=context, value=value, description='Misalkan {self} = {p} - {q}.', p=p, q=q)<|docstring|>Generates entity or question for subtraction p - q.<|endoftext|> |
48be2517eab99f108470a115e28830d756b0256b7345ab79fc5983367437c8d1 | @composition.module(number.is_integer_or_rational_or_decimal)
def add_or_sub(value, sample_args, context=None):
'Module for adding or subtracting two values.'
is_question = (context is None)
if (context is None):
context = composition.Context()
is_addition = random.choice([False, True])
(entropy, sample_args) = sample_args.peel()
if (value is None):
(entropy_p, entropy_q) = _entropy_for_pair(entropy)
p = number.integer_or_decimal(entropy_p, signed=True)
q = number.integer_or_decimal(entropy_q, signed=True)
else:
entropy = max(entropy, number.entropy_of_value(value))
sampler = _value_sampler(value)
p = sampler(entropy)
if is_addition:
q = (value - p)
if random.choice([False, True]):
(p, q) = (q, p)
else:
q = (p - value)
if random.choice([False, True]):
(p, q) = ((- q), (- p))
(p, q) = context.sample(sample_args, [p, q])
if is_addition:
return _add_question_or_entity(context, p, q, is_question)
else:
return _sub_question_or_entity(context, p, q, is_question) | Module for adding or subtracting two values. | mathematics_dataset/modules/arithmetic.py | add_or_sub | Wikidepia/mathematics_dataset_id | 0 | python | @composition.module(number.is_integer_or_rational_or_decimal)
def add_or_sub(value, sample_args, context=None):
is_question = (context is None)
if (context is None):
context = composition.Context()
is_addition = random.choice([False, True])
(entropy, sample_args) = sample_args.peel()
if (value is None):
(entropy_p, entropy_q) = _entropy_for_pair(entropy)
p = number.integer_or_decimal(entropy_p, signed=True)
q = number.integer_or_decimal(entropy_q, signed=True)
else:
entropy = max(entropy, number.entropy_of_value(value))
sampler = _value_sampler(value)
p = sampler(entropy)
if is_addition:
q = (value - p)
if random.choice([False, True]):
(p, q) = (q, p)
else:
q = (p - value)
if random.choice([False, True]):
(p, q) = ((- q), (- p))
(p, q) = context.sample(sample_args, [p, q])
if is_addition:
return _add_question_or_entity(context, p, q, is_question)
else:
return _sub_question_or_entity(context, p, q, is_question) | @composition.module(number.is_integer_or_rational_or_decimal)
def add_or_sub(value, sample_args, context=None):
is_question = (context is None)
if (context is None):
context = composition.Context()
is_addition = random.choice([False, True])
(entropy, sample_args) = sample_args.peel()
if (value is None):
(entropy_p, entropy_q) = _entropy_for_pair(entropy)
p = number.integer_or_decimal(entropy_p, signed=True)
q = number.integer_or_decimal(entropy_q, signed=True)
else:
entropy = max(entropy, number.entropy_of_value(value))
sampler = _value_sampler(value)
p = sampler(entropy)
if is_addition:
q = (value - p)
if random.choice([False, True]):
(p, q) = (q, p)
else:
q = (p - value)
if random.choice([False, True]):
(p, q) = ((- q), (- p))
(p, q) = context.sample(sample_args, [p, q])
if is_addition:
return _add_question_or_entity(context, p, q, is_question)
else:
return _sub_question_or_entity(context, p, q, is_question)<|docstring|>Module for adding or subtracting two values.<|endoftext|> |
62cf0998afef86cb017635c7290be7f98b862030107f8b226798aa50a3275a63 | def add_or_sub_in_base(sample_args):
'Module for addition and subtraction in another base.'
context = composition.Context()
(entropy, sample_args) = sample_args.peel()
(entropy_p, entropy_q) = _entropy_for_pair(entropy)
p = number.integer(entropy_p, signed=True)
q = number.integer(entropy_q, signed=True)
base = random.randint(2, 16)
if random.choice([False, True]):
answer = (p + q)
template = 'Di basis {base}, berapakah {p} + {q}?'
else:
answer = (p - q)
template = 'Di basis {base}, berapakah {p} - {q}?'
return example.Problem(question=example.question(context, template, base=base, p=display.NumberInBase(p, base), q=display.NumberInBase(q, base)), answer=display.NumberInBase(answer, base)) | Module for addition and subtraction in another base. | mathematics_dataset/modules/arithmetic.py | add_or_sub_in_base | Wikidepia/mathematics_dataset_id | 0 | python | def add_or_sub_in_base(sample_args):
context = composition.Context()
(entropy, sample_args) = sample_args.peel()
(entropy_p, entropy_q) = _entropy_for_pair(entropy)
p = number.integer(entropy_p, signed=True)
q = number.integer(entropy_q, signed=True)
base = random.randint(2, 16)
if random.choice([False, True]):
answer = (p + q)
template = 'Di basis {base}, berapakah {p} + {q}?'
else:
answer = (p - q)
template = 'Di basis {base}, berapakah {p} - {q}?'
return example.Problem(question=example.question(context, template, base=base, p=display.NumberInBase(p, base), q=display.NumberInBase(q, base)), answer=display.NumberInBase(answer, base)) | def add_or_sub_in_base(sample_args):
context = composition.Context()
(entropy, sample_args) = sample_args.peel()
(entropy_p, entropy_q) = _entropy_for_pair(entropy)
p = number.integer(entropy_p, signed=True)
q = number.integer(entropy_q, signed=True)
base = random.randint(2, 16)
if random.choice([False, True]):
answer = (p + q)
template = 'Di basis {base}, berapakah {p} + {q}?'
else:
answer = (p - q)
template = 'Di basis {base}, berapakah {p} - {q}?'
return example.Problem(question=example.question(context, template, base=base, p=display.NumberInBase(p, base), q=display.NumberInBase(q, base)), answer=display.NumberInBase(answer, base))<|docstring|>Module for addition and subtraction in another base.<|endoftext|> |
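`display.NumberInBase` is not shown in this chunk; a minimal stand-in that renders a signed integer in bases 2–16, the range `add_or_sub_in_base` samples from, could look like this (the function name and behavior are assumptions about what the display helper does):

```python
DIGITS = '0123456789abcdef'


def number_in_base(value, base):
    """Render a signed integer as a digit string in the given base (2-16)."""
    if not 2 <= base <= 16:
        raise ValueError('base must be in [2, 16]')
    if value == 0:
        return '0'
    sign = '-' if value < 0 else ''
    value = abs(value)
    digits = []
    while value:
        value, rem = divmod(value, base)
        digits.append(DIGITS[rem])
    return sign + ''.join(reversed(digits))
```

Rendering both operands and the answer through the same helper keeps the question and answer in a consistent base, e.g. `number_in_base(255, 16)` for a base-16 sum.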